00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 82 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3260 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.127 using credential 00000000-0000-0000-0000-000000000002 00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.165 Fetching changes from the remote Git repository 00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.227 > git --version # 'git version 2.39.2' 00:00:00.227 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.848 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.862 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.874 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:05.874 > git config core.sparsecheckout # timeout=10 00:00:05.884 > git read-tree -mu HEAD # timeout=10 00:00:05.900 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:05.916 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:05.917 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:05.982 [Pipeline] Start of Pipeline 00:00:05.996 [Pipeline] library 00:00:05.997 Loading library shm_lib@master 00:00:05.998 Library shm_lib@master is cached. Copying from home. 00:00:06.018 [Pipeline] node 00:00:06.025 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.029 [Pipeline] { 00:00:06.041 [Pipeline] catchError 00:00:06.043 [Pipeline] { 00:00:06.058 [Pipeline] wrap 00:00:06.067 [Pipeline] { 00:00:06.075 [Pipeline] stage 00:00:06.076 [Pipeline] { (Prologue) 00:00:06.264 [Pipeline] sh 00:00:06.549 + logger -p user.info -t JENKINS-CI 00:00:06.567 [Pipeline] echo 00:00:06.568 Node: CYP12 00:00:06.574 [Pipeline] sh 00:00:06.873 [Pipeline] setCustomBuildProperty 00:00:06.882 [Pipeline] echo 00:00:06.883 Cleanup processes 00:00:06.886 [Pipeline] sh 00:00:07.166 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.166 3595196 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.178 [Pipeline] sh 00:00:07.459 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.459 ++ grep -v 'sudo pgrep' 00:00:07.459 ++ awk '{print $1}' 00:00:07.459 + sudo kill -9 00:00:07.459 + true 00:00:07.476 [Pipeline] cleanWs 00:00:07.488 [WS-CLEANUP] Deleting project workspace... 00:00:07.488 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.496 [WS-CLEANUP] done 00:00:07.501 [Pipeline] setCustomBuildProperty 00:00:07.518 [Pipeline] sh 00:00:07.804 + sudo git config --global --replace-all safe.directory '*' 00:00:07.900 [Pipeline] httpRequest 00:00:07.929 [Pipeline] echo 00:00:07.930 Sorcerer 10.211.164.101 is alive 00:00:07.939 [Pipeline] httpRequest 00:00:07.944 HttpMethod: GET 00:00:07.945 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:07.945 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:07.959 Response Code: HTTP/1.1 200 OK 00:00:07.959 Success: Status code 200 is in the accepted range: 200,404 00:00:07.960 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:10.677 [Pipeline] sh 00:00:10.964 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:10.981 [Pipeline] httpRequest 00:00:11.011 [Pipeline] echo 00:00:11.013 Sorcerer 10.211.164.101 is alive 00:00:11.022 [Pipeline] httpRequest 00:00:11.028 HttpMethod: GET 00:00:11.029 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.029 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.046 Response Code: HTTP/1.1 200 OK 00:00:11.047 Success: Status code 200 is in the accepted range: 200,404 00:00:11.048 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:43.550 [Pipeline] sh 00:00:43.841 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:47.209 [Pipeline] sh 00:00:47.496 + git -C spdk log --oneline -n5 00:00:47.496 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:47.496 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:47.496 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:47.496 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:47.496 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:47.515 [Pipeline] withCredentials 00:00:47.527 > git --version # timeout=10 00:00:47.538 > git --version # 'git version 2.39.2' 00:00:47.556 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:47.558 [Pipeline] { 00:00:47.567 [Pipeline] retry 00:00:47.568 [Pipeline] { 00:00:47.584 [Pipeline] sh 00:00:47.868 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:48.142 [Pipeline] } 00:00:48.164 [Pipeline] // retry 00:00:48.169 [Pipeline] } 00:00:48.189 [Pipeline] // withCredentials 00:00:48.197 [Pipeline] httpRequest 00:00:48.210 [Pipeline] echo 00:00:48.212 Sorcerer 10.211.164.101 is alive 00:00:48.217 [Pipeline] httpRequest 00:00:48.222 HttpMethod: GET 00:00:48.222 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:48.223 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:48.225 Response Code: HTTP/1.1 200 OK 00:00:48.225 Success: Status code 200 is in the accepted range: 200,404 00:00:48.226 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:49.969 [Pipeline] sh 00:00:50.255 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:52.186 [Pipeline] sh 00:00:52.476 + git -C dpdk log --oneline -n5 00:00:52.476 eeb0605f11 version: 23.11.0 00:00:52.476 238778122a doc: 
update release notes for 23.11 00:00:52.476 46aa6b3cfc doc: fix description of RSS features 00:00:52.476 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:52.476 7e421ae345 devtools: support skipping forbid rule check 00:00:52.489 [Pipeline] } 00:00:52.507 [Pipeline] // stage 00:00:52.517 [Pipeline] stage 00:00:52.519 [Pipeline] { (Prepare) 00:00:52.541 [Pipeline] writeFile 00:00:52.558 [Pipeline] sh 00:00:52.844 + logger -p user.info -t JENKINS-CI 00:00:52.888 [Pipeline] sh 00:00:53.203 + logger -p user.info -t JENKINS-CI 00:00:53.240 [Pipeline] sh 00:00:53.523 + cat autorun-spdk.conf 00:00:53.523 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.523 SPDK_TEST_NVMF=1 00:00:53.523 SPDK_TEST_NVME_CLI=1 00:00:53.523 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.523 SPDK_TEST_NVMF_NICS=e810 00:00:53.523 SPDK_TEST_VFIOUSER=1 00:00:53.523 SPDK_RUN_UBSAN=1 00:00:53.523 NET_TYPE=phy 00:00:53.523 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:53.523 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:53.531 RUN_NIGHTLY=1 00:00:53.537 [Pipeline] readFile 00:00:53.566 [Pipeline] withEnv 00:00:53.568 [Pipeline] { 00:00:53.581 [Pipeline] sh 00:00:53.865 + set -ex 00:00:53.865 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:53.865 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:53.865 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.865 ++ SPDK_TEST_NVMF=1 00:00:53.865 ++ SPDK_TEST_NVME_CLI=1 00:00:53.865 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.865 ++ SPDK_TEST_NVMF_NICS=e810 00:00:53.865 ++ SPDK_TEST_VFIOUSER=1 00:00:53.865 ++ SPDK_RUN_UBSAN=1 00:00:53.865 ++ NET_TYPE=phy 00:00:53.865 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:53.865 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:53.865 ++ RUN_NIGHTLY=1 00:00:53.865 + case $SPDK_TEST_NVMF_NICS in 00:00:53.865 + DRIVERS=ice 00:00:53.865 + [[ tcp == \r\d\m\a ]] 00:00:53.865 + [[ -n ice ]] 00:00:53.865 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:53.866 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:02.009 rmmod: ERROR: Module irdma is not currently loaded 00:01:02.009 rmmod: ERROR: Module i40iw is not currently loaded 00:01:02.009 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:02.009 + true 00:01:02.009 + for D in $DRIVERS 00:01:02.009 + sudo modprobe ice 00:01:02.009 + exit 0 00:01:02.019 [Pipeline] } 00:01:02.038 [Pipeline] // withEnv 00:01:02.043 [Pipeline] } 00:01:02.057 [Pipeline] // stage 00:01:02.067 [Pipeline] catchError 00:01:02.068 [Pipeline] { 00:01:02.081 [Pipeline] timeout 00:01:02.081 Timeout set to expire in 50 min 00:01:02.082 [Pipeline] { 00:01:02.095 [Pipeline] stage 00:01:02.097 [Pipeline] { (Tests) 00:01:02.112 [Pipeline] sh 00:01:02.398 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.398 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.398 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.398 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:02.398 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.398 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:02.398 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.398 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.398 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:02.398 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.398 + source /etc/os-release 00:01:02.398 ++ NAME='Fedora Linux' 00:01:02.398 ++ VERSION='38 (Cloud Edition)' 00:01:02.398 ++ ID=fedora 00:01:02.398 ++ VERSION_ID=38 00:01:02.398 ++ VERSION_CODENAME= 00:01:02.398 ++ PLATFORM_ID=platform:f38 00:01:02.398 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:02.398 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:02.398 ++ LOGO=fedora-logo-icon 00:01:02.398 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:02.398 ++ HOME_URL=https://fedoraproject.org/ 00:01:02.398 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:02.398 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:02.398 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:02.398 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:02.398 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:02.398 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:02.398 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:02.398 ++ SUPPORT_END=2024-05-14 00:01:02.398 ++ VARIANT='Cloud Edition' 00:01:02.398 ++ VARIANT_ID=cloud 00:01:02.398 + uname -a 00:01:02.398 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:02.398 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:05.700 Hugepages 00:01:05.700 node hugesize free / total 00:01:05.700 node0 1048576kB 0 / 0 00:01:05.700 node0 2048kB 0 / 0 00:01:05.700 node1 1048576kB 0 / 0 00:01:05.700 node1 2048kB 0 / 0 00:01:05.700 00:01:05.700 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:05.700 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:05.700 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:05.700 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:05.700 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:05.700 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:05.700 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:05.960 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:05.960 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:05.960 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:05.960 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:05.960 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:05.960 + rm -f /tmp/spdk-ld-path 00:01:05.960 + source autorun-spdk.conf 00:01:05.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.960 ++ SPDK_TEST_NVMF=1 00:01:05.960 ++ SPDK_TEST_NVME_CLI=1 00:01:05.960 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.960 ++ SPDK_TEST_NVMF_NICS=e810 00:01:05.960 ++ SPDK_TEST_VFIOUSER=1 00:01:05.960 ++ SPDK_RUN_UBSAN=1 00:01:05.960 ++ NET_TYPE=phy 00:01:05.960 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:05.960 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:05.960 ++ RUN_NIGHTLY=1 00:01:05.960 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:05.960 + [[ -n '' ]] 00:01:05.960 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.960 + for M in /var/spdk/build-*-manifest.txt 00:01:05.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:05.960 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:05.960 + for M in /var/spdk/build-*-manifest.txt 00:01:05.960 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:05.960 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:05.960 ++ uname 00:01:05.960 + [[ Linux == \L\i\n\u\x ]] 00:01:05.960 + sudo dmesg -T 00:01:05.960 + sudo dmesg --clear 00:01:05.960 + dmesg_pid=3596337 00:01:05.960 + [[ Fedora Linux == FreeBSD ]] 00:01:05.960 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.960 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.960 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:05.960 + [[ -x /usr/src/fio-static/fio ]] 00:01:05.960 + export FIO_BIN=/usr/src/fio-static/fio 00:01:05.960 + FIO_BIN=/usr/src/fio-static/fio 00:01:05.960 + sudo dmesg -Tw 00:01:05.960 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:05.960 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:05.960 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:05.960 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.960 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.960 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:05.960 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.960 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.960 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:05.960 Test configuration: 00:01:05.960 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.960 SPDK_TEST_NVMF=1 00:01:05.960 SPDK_TEST_NVME_CLI=1 00:01:05.960 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.960 SPDK_TEST_NVMF_NICS=e810 00:01:05.960 SPDK_TEST_VFIOUSER=1 00:01:05.960 SPDK_RUN_UBSAN=1 00:01:05.960 NET_TYPE=phy 00:01:05.960 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:05.960 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:05.960 RUN_NIGHTLY=1 01:19:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:05.960 01:19:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:05.960 01:19:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:05.960 01:19:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:05.960 01:19:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.961 01:19:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.961 01:19:32 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.961 01:19:32 -- paths/export.sh@5 -- $ export PATH 00:01:05.961 01:19:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.961 01:19:32 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:05.961 01:19:32 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:05.961 01:19:32 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720739972.XXXXXX 00:01:05.961 01:19:32 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720739972.6vK6NY 00:01:05.961 01:19:32 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:05.961 01:19:32 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:05.961 01:19:32 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:05.961 01:19:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:06.221 01:19:32 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.221 01:19:32 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.221 01:19:32 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:06.221 01:19:32 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:06.221 01:19:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.221 01:19:32 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:06.221 01:19:32 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:06.221 01:19:32 -- pm/common@17 -- $ local monitor 00:01:06.221 01:19:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.221 01:19:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.221 01:19:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.221 01:19:32 -- pm/common@21 -- $ date +%s 00:01:06.221 01:19:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.221 01:19:32 -- pm/common@25 -- $ sleep 1 00:01:06.221 01:19:32 -- pm/common@21 -- $ date +%s 00:01:06.221 01:19:32 -- pm/common@21 -- $ date +%s 00:01:06.221 01:19:32 -- pm/common@21 -- $ date +%s 00:01:06.221 01:19:32 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720739972 00:01:06.221 01:19:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720739972 00:01:06.221 01:19:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720739972 00:01:06.221 01:19:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720739972 00:01:06.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720739972_collect-vmstat.pm.log 00:01:06.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720739972_collect-cpu-load.pm.log 00:01:06.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720739972_collect-cpu-temp.pm.log 00:01:06.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720739972_collect-bmc-pm.bmc.pm.log 00:01:07.161 01:19:33 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:07.161 01:19:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:07.161 01:19:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:07.161 01:19:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.161 01:19:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:07.161 Thu Jul 11 11:19:33 PM UTC 2024 00:01:07.161 01:19:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:07.161 v24.05-13-g5fa2f5086 00:01:07.161 01:19:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:07.161 01:19:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:07.161 01:19:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:07.161 01:19:33 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:07.161 01:19:33 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:07.161 01:19:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.161 ************************************ 00:01:07.161 START TEST ubsan 00:01:07.161 ************************************ 00:01:07.161 01:19:33 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:07.161 using ubsan 00:01:07.161 00:01:07.161 real 0m0.000s 00:01:07.161 user 0m0.000s 00:01:07.161 sys 0m0.000s 00:01:07.161 01:19:33 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:07.161 01:19:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:07.161 ************************************ 00:01:07.161 END TEST ubsan 00:01:07.161 ************************************ 00:01:07.161 01:19:33 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:07.161 01:19:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:07.161 01:19:33 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:07.161 01:19:33 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:07.161 01:19:33 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:07.161 01:19:33 -- common/autotest_common.sh@10 -- $ set +x 
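(Editor's note) The "START TEST ubsan" / "END TEST ubsan" banners, the "using ubsan" echo and the real/user/sys timings above come from the run_test wrapper that spdk/autorun.sh and autotest_common.sh place around each stage. As a rough illustration only -- a simplified sketch of the pattern, not the actual SPDK helper -- it behaves like:

    # run_test NAME CMD...: bracket a stage with banners, time it, keep its exit code
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

    # as invoked in the log:
    #   run_test ubsan echo 'using ubsan'
    #   run_test build_native_dpdk _build_native_dpdk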
00:01:07.161 ************************************ 00:01:07.161 START TEST build_native_dpdk 00:01:07.161 ************************************ 00:01:07.161 01:19:33 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:07.161 eeb0605f11 version: 23.11.0 00:01:07.161 238778122a doc: update release notes for 23.11 00:01:07.161 46aa6b3cfc doc: fix description of RSS features 00:01:07.161 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:07.161 7e421ae345 devtools: support skipping forbid rule check 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:07.161 01:19:33 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:07.161 01:19:33 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:07.421 01:19:33 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:07.421 01:19:33 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:07.421 01:19:33 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:07.421 patching file config/rte_config.h 00:01:07.421 Hunk #1 succeeded at 60 (offset 1 line). 00:01:07.421 01:19:33 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:07.422 01:19:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:07.422 01:19:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:07.422 01:19:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:07.422 01:19:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:12.711 The Meson build system 00:01:12.711 Version: 1.3.1 00:01:12.711 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:12.711 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:12.711 Build type: native build 00:01:12.711 Program cat found: YES (/usr/bin/cat) 00:01:12.711 Project name: DPDK 00:01:12.711 Project version: 23.11.0 00:01:12.711 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.711 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:12.711 Host machine cpu family: x86_64 00:01:12.711 Host machine cpu: x86_64 00:01:12.711 Message: ## Building in Developer Mode ## 00:01:12.711 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:12.711 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:12.711 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:12.711 Program python3 found: YES (/usr/bin/python3) 00:01:12.711 Program cat found: YES (/usr/bin/cat) 00:01:12.711 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
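(Editor's note) The meson invocation above is the step where autobuild configures the external DPDK tree that SPDK will be built against. For reference, the same configuration can be reproduced by hand; the snippet below only condenses what already appears in this log (the meson options from the command above and the ninja invocation that follows later), with the workspace path shortened to $WS for readability:

    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    cd "$WS/dpdk"
    # (newer meson prefers "meson setup build-tmp ...", as the deprecation warning later in the log notes)
    meson build-tmp --prefix="$WS/dpdk/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j"$(nproc)"
    # SPDK is then configured against this tree via
    # --with-dpdk="$WS/dpdk/build" (see config_params earlier in the log)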
00:01:12.711 Compiler for C supports arguments -march=native: YES 00:01:12.711 Checking for size of "void *" : 8 00:01:12.711 Checking for size of "void *" : 8 (cached) 00:01:12.711 Library m found: YES 00:01:12.711 Library numa found: YES 00:01:12.711 Has header "numaif.h" : YES 00:01:12.711 Library fdt found: NO 00:01:12.711 Library execinfo found: NO 00:01:12.711 Has header "execinfo.h" : YES 00:01:12.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.711 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:12.711 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:12.711 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:12.711 Run-time dependency openssl found: YES 3.0.9 00:01:12.711 Run-time dependency libpcap found: YES 1.10.4 00:01:12.711 Has header "pcap.h" with dependency libpcap: YES 00:01:12.711 Compiler for C supports arguments -Wcast-qual: YES 00:01:12.711 Compiler for C supports arguments -Wdeprecated: YES 00:01:12.711 Compiler for C supports arguments -Wformat: YES 00:01:12.711 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:12.711 Compiler for C supports arguments -Wformat-security: NO 00:01:12.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.711 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:12.711 Compiler for C supports arguments -Wnested-externs: YES 00:01:12.711 Compiler for C supports arguments -Wold-style-definition: YES 00:01:12.711 Compiler for C supports arguments -Wpointer-arith: YES 00:01:12.711 Compiler for C supports arguments -Wsign-compare: YES 00:01:12.711 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:12.711 Compiler for C supports arguments -Wundef: YES 00:01:12.711 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.711 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:12.711 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:12.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.711 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:12.711 Program objdump found: YES (/usr/bin/objdump) 00:01:12.711 Compiler for C supports arguments -mavx512f: YES 00:01:12.711 Checking if "AVX512 checking" compiles: YES 00:01:12.711 Fetching value of define "__SSE4_2__" : 1 00:01:12.711 Fetching value of define "__AES__" : 1 00:01:12.711 Fetching value of define "__AVX__" : 1 00:01:12.711 Fetching value of define "__AVX2__" : 1 00:01:12.711 Fetching value of define "__AVX512BW__" : 1 00:01:12.711 Fetching value of define "__AVX512CD__" : 1 00:01:12.711 Fetching value of define "__AVX512DQ__" : 1 00:01:12.711 Fetching value of define "__AVX512F__" : 1 00:01:12.711 Fetching value of define "__AVX512VL__" : 1 00:01:12.711 Fetching value of define "__PCLMUL__" : 1 00:01:12.711 Fetching value of define "__RDRND__" : 1 00:01:12.711 Fetching value of define "__RDSEED__" : 1 00:01:12.711 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:12.711 Fetching value of define "__znver1__" : (undefined) 00:01:12.711 Fetching value of define "__znver2__" : (undefined) 00:01:12.711 Fetching value of define "__znver3__" : (undefined) 00:01:12.711 Fetching value of define "__znver4__" : (undefined) 00:01:12.711 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:12.711 Message: lib/log: Defining dependency "log" 00:01:12.711 Message: lib/kvargs: Defining dependency "kvargs" 00:01:12.711 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:12.711 Checking for function "getentropy" : NO 00:01:12.711 Message: lib/eal: Defining dependency "eal" 00:01:12.711 Message: lib/ring: Defining dependency "ring" 00:01:12.711 Message: lib/rcu: Defining dependency "rcu" 00:01:12.711 Message: lib/mempool: Defining dependency "mempool" 00:01:12.711 Message: lib/mbuf: Defining dependency "mbuf" 00:01:12.711 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.711 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:12.711 Compiler for C supports arguments -mpclmul: YES 00:01:12.711 Compiler for C supports arguments -maes: YES 00:01:12.711 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:12.711 Compiler for C supports arguments -mavx512bw: YES 00:01:12.711 Compiler for C supports arguments -mavx512dq: YES 00:01:12.711 Compiler for C supports arguments -mavx512vl: YES 00:01:12.711 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:12.711 Compiler for C supports arguments -mavx2: YES 00:01:12.711 Compiler for C supports arguments -mavx: YES 00:01:12.711 Message: lib/net: Defining dependency "net" 00:01:12.711 Message: lib/meter: Defining dependency "meter" 00:01:12.711 Message: lib/ethdev: Defining dependency "ethdev" 00:01:12.711 Message: lib/pci: Defining dependency "pci" 00:01:12.711 Message: lib/cmdline: Defining dependency "cmdline" 00:01:12.711 Message: lib/metrics: Defining dependency "metrics" 00:01:12.711 Message: lib/hash: Defining dependency "hash" 00:01:12.711 Message: lib/timer: Defining dependency "timer" 00:01:12.711 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.711 Message: lib/acl: Defining dependency "acl" 00:01:12.711 Message: lib/bbdev: Defining dependency "bbdev" 00:01:12.711 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:12.711 Run-time dependency libelf found: YES 0.190 00:01:12.711 Message: lib/bpf: Defining dependency "bpf" 00:01:12.711 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:12.711 Message: lib/compressdev: Defining dependency "compressdev" 00:01:12.711 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:12.711 Message: lib/distributor: Defining dependency "distributor" 00:01:12.711 Message: lib/dmadev: Defining dependency "dmadev" 00:01:12.711 Message: lib/efd: Defining dependency "efd" 00:01:12.711 Message: lib/eventdev: Defining dependency "eventdev" 00:01:12.711 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:12.711 Message: lib/gpudev: Defining dependency "gpudev" 00:01:12.711 Message: lib/gro: Defining dependency "gro" 00:01:12.711 Message: lib/gso: Defining dependency "gso" 00:01:12.711 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:12.711 Message: lib/jobstats: Defining dependency "jobstats" 00:01:12.711 Message: lib/latencystats: Defining dependency "latencystats" 00:01:12.711 Message: lib/lpm: Defining dependency "lpm" 00:01:12.711 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.711 Fetching value of define "__AVX512IFMA__" : 1 00:01:12.711 Message: 
lib/member: Defining dependency "member" 00:01:12.711 Message: lib/pcapng: Defining dependency "pcapng" 00:01:12.711 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:12.711 Message: lib/power: Defining dependency "power" 00:01:12.711 Message: lib/rawdev: Defining dependency "rawdev" 00:01:12.711 Message: lib/regexdev: Defining dependency "regexdev" 00:01:12.711 Message: lib/mldev: Defining dependency "mldev" 00:01:12.711 Message: lib/rib: Defining dependency "rib" 00:01:12.711 Message: lib/reorder: Defining dependency "reorder" 00:01:12.711 Message: lib/sched: Defining dependency "sched" 00:01:12.711 Message: lib/security: Defining dependency "security" 00:01:12.712 Message: lib/stack: Defining dependency "stack" 00:01:12.712 Has header "linux/userfaultfd.h" : YES 00:01:12.712 Has header "linux/vduse.h" : YES 00:01:12.712 Message: lib/vhost: Defining dependency "vhost" 00:01:12.712 Message: lib/ipsec: Defining dependency "ipsec" 00:01:12.712 Message: lib/pdcp: Defining dependency "pdcp" 00:01:12.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.712 Message: lib/fib: Defining dependency "fib" 00:01:12.712 Message: lib/port: Defining dependency "port" 00:01:12.712 Message: lib/pdump: Defining dependency "pdump" 00:01:12.712 Message: lib/table: Defining dependency "table" 00:01:12.712 Message: lib/pipeline: Defining dependency "pipeline" 00:01:12.712 Message: lib/graph: Defining dependency "graph" 00:01:12.712 Message: lib/node: Defining dependency "node" 00:01:12.712 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:12.712 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:12.712 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:13.656 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:13.656 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:13.656 Compiler for C supports arguments -Wno-unused-value: YES 00:01:13.656 Compiler for C supports arguments -Wno-format: YES 00:01:13.656 Compiler for C supports arguments -Wno-format-security: YES 00:01:13.656 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:13.656 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:13.656 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:13.656 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:13.656 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:13.656 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:13.656 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:13.656 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:13.656 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:13.656 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:13.656 Has header "sys/epoll.h" : YES 00:01:13.656 Program doxygen found: YES (/usr/bin/doxygen) 00:01:13.656 Configuring doxy-api-html.conf using configuration 00:01:13.656 Configuring doxy-api-man.conf using configuration 00:01:13.656 Program mandb found: YES (/usr/bin/mandb) 00:01:13.656 Program sphinx-build found: NO 00:01:13.656 Configuring rte_build_config.h using configuration 00:01:13.656 Message: 00:01:13.656 ================= 00:01:13.656 Applications Enabled 00:01:13.656 ================= 00:01:13.656 00:01:13.656 apps: 00:01:13.656 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:01:13.656 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:13.656 test-pmd, test-regex, test-sad, test-security-perf, 00:01:13.656 00:01:13.656 Message: 00:01:13.656 ================= 00:01:13.656 Libraries Enabled 00:01:13.656 ================= 00:01:13.656 00:01:13.656 libs: 00:01:13.656 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:13.656 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:13.656 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:13.656 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:13.656 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:13.656 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:13.656 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:13.656 00:01:13.656 00:01:13.656 Message: 00:01:13.656 =============== 00:01:13.656 Drivers Enabled 00:01:13.656 =============== 00:01:13.656 00:01:13.656 common: 00:01:13.656 00:01:13.656 bus: 00:01:13.656 pci, vdev, 00:01:13.656 mempool: 00:01:13.656 ring, 00:01:13.656 dma: 00:01:13.656 00:01:13.656 net: 00:01:13.656 i40e, 00:01:13.656 raw: 00:01:13.656 00:01:13.656 crypto: 00:01:13.656 00:01:13.656 compress: 00:01:13.656 00:01:13.656 regex: 00:01:13.656 00:01:13.656 ml: 00:01:13.656 00:01:13.656 vdpa: 00:01:13.656 00:01:13.656 event: 00:01:13.656 00:01:13.656 baseband: 00:01:13.656 00:01:13.656 gpu: 00:01:13.656 00:01:13.656 00:01:13.656 Message: 00:01:13.656 ================= 00:01:13.656 Content Skipped 00:01:13.656 ================= 00:01:13.656 00:01:13.656 apps: 00:01:13.656 00:01:13.656 libs: 00:01:13.656 00:01:13.656 drivers: 00:01:13.656 common/cpt: not in enabled drivers build config 00:01:13.656 common/dpaax: not in enabled drivers build config 00:01:13.656 common/iavf: not in enabled drivers build config 00:01:13.656 common/idpf: not in enabled drivers build config 00:01:13.656 common/mvep: not in enabled drivers build config 00:01:13.656 common/octeontx: not in enabled drivers build config 00:01:13.656 bus/auxiliary: not in enabled drivers build config 00:01:13.656 bus/cdx: not in enabled drivers build config 00:01:13.656 bus/dpaa: not in enabled drivers build config 00:01:13.656 bus/fslmc: not in enabled drivers build config 00:01:13.656 bus/ifpga: not in enabled drivers build config 00:01:13.656 bus/platform: not in enabled drivers build config 00:01:13.656 bus/vmbus: not in enabled drivers build config 00:01:13.656 common/cnxk: not in enabled drivers build config 00:01:13.656 common/mlx5: not in enabled drivers build config 00:01:13.656 common/nfp: not in enabled drivers build config 00:01:13.656 common/qat: not in enabled drivers build config 00:01:13.656 common/sfc_efx: not in enabled drivers build config 00:01:13.656 mempool/bucket: not in enabled drivers build config 00:01:13.656 mempool/cnxk: not in enabled drivers build config 00:01:13.656 mempool/dpaa: not in enabled drivers build config 00:01:13.656 mempool/dpaa2: not in enabled drivers build config 00:01:13.656 mempool/octeontx: not in enabled drivers build config 00:01:13.656 mempool/stack: not in enabled drivers build config 00:01:13.656 dma/cnxk: not in enabled drivers build config 00:01:13.656 dma/dpaa: not in enabled drivers build config 00:01:13.656 dma/dpaa2: not in enabled drivers build config 00:01:13.656 dma/hisilicon: not in enabled drivers build config 00:01:13.656 dma/idxd: not in enabled drivers build 
config 00:01:13.656 dma/ioat: not in enabled drivers build config 00:01:13.656 dma/skeleton: not in enabled drivers build config 00:01:13.656 net/af_packet: not in enabled drivers build config 00:01:13.656 net/af_xdp: not in enabled drivers build config 00:01:13.656 net/ark: not in enabled drivers build config 00:01:13.656 net/atlantic: not in enabled drivers build config 00:01:13.656 net/avp: not in enabled drivers build config 00:01:13.656 net/axgbe: not in enabled drivers build config 00:01:13.656 net/bnx2x: not in enabled drivers build config 00:01:13.656 net/bnxt: not in enabled drivers build config 00:01:13.656 net/bonding: not in enabled drivers build config 00:01:13.656 net/cnxk: not in enabled drivers build config 00:01:13.656 net/cpfl: not in enabled drivers build config 00:01:13.656 net/cxgbe: not in enabled drivers build config 00:01:13.656 net/dpaa: not in enabled drivers build config 00:01:13.656 net/dpaa2: not in enabled drivers build config 00:01:13.656 net/e1000: not in enabled drivers build config 00:01:13.656 net/ena: not in enabled drivers build config 00:01:13.656 net/enetc: not in enabled drivers build config 00:01:13.656 net/enetfec: not in enabled drivers build config 00:01:13.656 net/enic: not in enabled drivers build config 00:01:13.656 net/failsafe: not in enabled drivers build config 00:01:13.656 net/fm10k: not in enabled drivers build config 00:01:13.656 net/gve: not in enabled drivers build config 00:01:13.656 net/hinic: not in enabled drivers build config 00:01:13.656 net/hns3: not in enabled drivers build config 00:01:13.656 net/iavf: not in enabled drivers build config 00:01:13.656 net/ice: not in enabled drivers build config 00:01:13.656 net/idpf: not in enabled drivers build config 00:01:13.656 net/igc: not in enabled drivers build config 00:01:13.656 net/ionic: not in enabled drivers build config 00:01:13.656 net/ipn3ke: not in enabled drivers build config 00:01:13.656 net/ixgbe: not in enabled drivers build config 00:01:13.656 net/mana: not in enabled drivers build config 00:01:13.656 net/memif: not in enabled drivers build config 00:01:13.656 net/mlx4: not in enabled drivers build config 00:01:13.656 net/mlx5: not in enabled drivers build config 00:01:13.656 net/mvneta: not in enabled drivers build config 00:01:13.656 net/mvpp2: not in enabled drivers build config 00:01:13.656 net/netvsc: not in enabled drivers build config 00:01:13.656 net/nfb: not in enabled drivers build config 00:01:13.656 net/nfp: not in enabled drivers build config 00:01:13.656 net/ngbe: not in enabled drivers build config 00:01:13.656 net/null: not in enabled drivers build config 00:01:13.656 net/octeontx: not in enabled drivers build config 00:01:13.656 net/octeon_ep: not in enabled drivers build config 00:01:13.656 net/pcap: not in enabled drivers build config 00:01:13.656 net/pfe: not in enabled drivers build config 00:01:13.656 net/qede: not in enabled drivers build config 00:01:13.657 net/ring: not in enabled drivers build config 00:01:13.657 net/sfc: not in enabled drivers build config 00:01:13.657 net/softnic: not in enabled drivers build config 00:01:13.657 net/tap: not in enabled drivers build config 00:01:13.657 net/thunderx: not in enabled drivers build config 00:01:13.657 net/txgbe: not in enabled drivers build config 00:01:13.657 net/vdev_netvsc: not in enabled drivers build config 00:01:13.657 net/vhost: not in enabled drivers build config 00:01:13.657 net/virtio: not in enabled drivers build config 00:01:13.657 net/vmxnet3: not in enabled drivers build config 
00:01:13.657 raw/cnxk_bphy: not in enabled drivers build config 00:01:13.657 raw/cnxk_gpio: not in enabled drivers build config 00:01:13.657 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:13.657 raw/ifpga: not in enabled drivers build config 00:01:13.657 raw/ntb: not in enabled drivers build config 00:01:13.657 raw/skeleton: not in enabled drivers build config 00:01:13.657 crypto/armv8: not in enabled drivers build config 00:01:13.657 crypto/bcmfs: not in enabled drivers build config 00:01:13.657 crypto/caam_jr: not in enabled drivers build config 00:01:13.657 crypto/ccp: not in enabled drivers build config 00:01:13.657 crypto/cnxk: not in enabled drivers build config 00:01:13.657 crypto/dpaa_sec: not in enabled drivers build config 00:01:13.657 crypto/dpaa2_sec: not in enabled drivers build config 00:01:13.657 crypto/ipsec_mb: not in enabled drivers build config 00:01:13.657 crypto/mlx5: not in enabled drivers build config 00:01:13.657 crypto/mvsam: not in enabled drivers build config 00:01:13.657 crypto/nitrox: not in enabled drivers build config 00:01:13.657 crypto/null: not in enabled drivers build config 00:01:13.657 crypto/octeontx: not in enabled drivers build config 00:01:13.657 crypto/openssl: not in enabled drivers build config 00:01:13.657 crypto/scheduler: not in enabled drivers build config 00:01:13.657 crypto/uadk: not in enabled drivers build config 00:01:13.657 crypto/virtio: not in enabled drivers build config 00:01:13.657 compress/isal: not in enabled drivers build config 00:01:13.657 compress/mlx5: not in enabled drivers build config 00:01:13.657 compress/octeontx: not in enabled drivers build config 00:01:13.657 compress/zlib: not in enabled drivers build config 00:01:13.657 regex/mlx5: not in enabled drivers build config 00:01:13.657 regex/cn9k: not in enabled drivers build config 00:01:13.657 ml/cnxk: not in enabled drivers build config 00:01:13.657 vdpa/ifc: not in enabled drivers build config 00:01:13.657 vdpa/mlx5: not in enabled drivers build config 00:01:13.657 vdpa/nfp: not in enabled drivers build config 00:01:13.657 vdpa/sfc: not in enabled drivers build config 00:01:13.657 event/cnxk: not in enabled drivers build config 00:01:13.657 event/dlb2: not in enabled drivers build config 00:01:13.657 event/dpaa: not in enabled drivers build config 00:01:13.657 event/dpaa2: not in enabled drivers build config 00:01:13.657 event/dsw: not in enabled drivers build config 00:01:13.657 event/opdl: not in enabled drivers build config 00:01:13.657 event/skeleton: not in enabled drivers build config 00:01:13.657 event/sw: not in enabled drivers build config 00:01:13.657 event/octeontx: not in enabled drivers build config 00:01:13.657 baseband/acc: not in enabled drivers build config 00:01:13.657 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:13.657 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:13.657 baseband/la12xx: not in enabled drivers build config 00:01:13.657 baseband/null: not in enabled drivers build config 00:01:13.657 baseband/turbo_sw: not in enabled drivers build config 00:01:13.657 gpu/cuda: not in enabled drivers build config 00:01:13.657 00:01:13.657 00:01:13.657 Build targets in project: 215 00:01:13.657 00:01:13.657 DPDK 23.11.0 00:01:13.657 00:01:13.657 User defined options 00:01:13.657 libdir : lib 00:01:13.657 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.657 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:13.657 c_link_args : 00:01:13.657 enable_docs : false 
00:01:13.657 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:13.657 enable_kmods : false 00:01:13.657 machine : native 00:01:13.657 tests : false 00:01:13.657 00:01:13.657 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.657 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:13.918 01:19:40 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:13.918 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:14.183 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:14.183 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:14.183 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:14.183 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:14.183 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:14.183 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:14.183 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:14.183 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:14.183 [9/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:14.183 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:14.183 [11/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:14.183 [12/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:14.183 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:14.183 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:14.183 [15/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:14.183 [16/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:14.183 [17/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:14.183 [18/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:14.183 [19/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:14.183 [20/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:14.183 [21/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:14.183 [22/705] Linking static target lib/librte_kvargs.a 00:01:14.183 [23/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:14.442 [24/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:14.442 [25/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:14.442 [26/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:14.442 [27/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:14.442 [28/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:14.442 [29/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:14.442 [30/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:14.442 [31/705] Linking static target lib/librte_pci.a 00:01:14.442 [32/705] Linking static target lib/librte_log.a 00:01:14.442 [33/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:14.442 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:14.442 [35/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:14.442 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:14.702 [37/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.702 [38/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.702 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:14.702 [40/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:14.702 [41/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:14.702 [42/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:14.702 [43/705] Linking static target lib/librte_cfgfile.a 00:01:14.702 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:14.702 [45/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:14.702 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:14.702 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:14.702 [48/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:14.702 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:14.702 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:14.702 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:14.702 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:14.702 [53/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:14.702 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:14.702 [55/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:14.702 [56/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:14.702 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:14.702 [58/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:14.702 [59/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:14.702 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:14.702 [61/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:14.702 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:14.702 [63/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:14.702 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:14.702 [65/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:14.702 [66/705] Linking static target lib/librte_meter.a 00:01:14.965 [67/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:14.965 [68/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:14.965 [69/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:14.965 [70/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:14.965 [71/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:14.965 [72/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:14.965 [73/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:14.965 [74/705] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:14.965 [75/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:14.965 [76/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:14.965 [77/705] Linking static target lib/librte_ring.a 00:01:14.965 [78/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:14.965 [79/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:14.965 [80/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:14.965 [81/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:14.965 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:14.965 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:14.965 [84/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:14.965 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:14.966 [86/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:14.966 [87/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:14.966 [88/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:14.966 [89/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:14.966 [90/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:14.966 [91/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:14.966 [92/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:14.966 [93/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:14.966 [94/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:14.966 [95/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:14.966 [96/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:14.966 [97/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:14.966 [98/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:14.966 [99/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:14.966 [100/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:14.966 [101/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:14.966 [102/705] Linking static target lib/librte_cmdline.a 00:01:14.966 [103/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:14.966 [104/705] Linking static target lib/librte_metrics.a 00:01:14.966 [105/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:14.966 [106/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:14.966 [107/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:14.966 [108/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:14.966 [109/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:14.966 [110/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:14.966 [111/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:14.966 [112/705] Linking static target lib/librte_bitratestats.a 00:01:14.966 [113/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:14.966 [114/705] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:14.966 [115/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:14.966 [116/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:14.966 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:14.966 [118/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:14.966 [119/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:14.966 [120/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:14.966 [121/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:14.966 [122/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:14.966 [123/705] Linking static target lib/librte_net.a 00:01:15.224 [124/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:15.224 [125/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.224 [126/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:15.224 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:15.224 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:15.224 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:15.224 [130/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:15.224 [131/705] Linking target lib/librte_log.so.24.0 00:01:15.224 [132/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:15.224 [133/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:15.224 [134/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:15.224 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:15.224 [136/705] Linking static target lib/librte_compressdev.a 00:01:15.224 [137/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.224 [138/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:15.224 [139/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:15.224 [140/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:15.224 [141/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:15.224 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:15.224 [143/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:15.224 [144/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:15.224 [145/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.224 [146/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:15.224 [147/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:15.224 [148/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.224 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:15.224 [150/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:15.224 [151/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.224 [152/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:15.224 [153/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:15.224 [154/705] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:15.224 [155/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:15.224 [156/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:15.224 [157/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:15.224 [158/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:15.224 [159/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:15.224 [160/705] Linking static target lib/librte_timer.a 00:01:15.485 [161/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:15.485 [162/705] Linking static target lib/librte_dispatcher.a 00:01:15.485 [163/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:15.485 [164/705] Linking target lib/librte_kvargs.so.24.0 00:01:15.485 [165/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:15.485 [166/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:15.485 [167/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:15.485 [168/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:15.485 [169/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:15.485 [170/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:15.485 [171/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:15.485 [172/705] Linking static target lib/librte_jobstats.a 00:01:15.485 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:15.485 [174/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:15.485 [175/705] Linking static target lib/librte_bbdev.a 00:01:15.485 [176/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:15.485 [177/705] Linking static target lib/librte_gpudev.a 00:01:15.485 [178/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.485 [179/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:15.485 [180/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:15.485 [181/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:15.485 [182/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:15.485 [183/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:15.486 [184/705] Linking static target lib/librte_dmadev.a 00:01:15.486 [185/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:15.486 [186/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:15.486 [187/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:15.486 [188/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:15.486 [189/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:15.486 [190/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:15.486 [191/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:15.486 [192/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:15.486 [193/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:15.486 [194/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:15.486 [195/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:15.486 [196/705] Linking static target lib/librte_mempool.a 00:01:15.486 [197/705] 
Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:15.486 [198/705] Linking static target lib/librte_gro.a 00:01:15.486 [199/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.486 [200/705] Linking static target lib/librte_distributor.a 00:01:15.486 [201/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:15.486 [202/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:15.486 [203/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:15.486 [204/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:15.486 [205/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:15.486 [206/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:15.486 [207/705] Linking static target lib/librte_stack.a 00:01:15.486 [208/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:15.486 [209/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:15.746 [210/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:15.746 [211/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:15.746 [212/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:15.746 [213/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:15.746 [214/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:15.746 [215/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:15.746 [216/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:15.746 [217/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:15.746 [218/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:15.746 [219/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:15.746 [220/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:15.746 [221/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:15.746 [222/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:15.746 [223/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:15.746 [224/705] Linking static target lib/librte_latencystats.a 00:01:15.746 [225/705] Linking static target lib/librte_regexdev.a 00:01:15.746 [226/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:15.746 [227/705] Linking static target lib/librte_gso.a 00:01:15.746 [228/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:15.746 [229/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:15.746 [230/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:15.746 [231/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:15.746 [232/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:15.746 [233/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:15.746 [234/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:15.746 [235/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:15.746 [236/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:15.746 [237/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:15.746 [238/705] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:15.746 [239/705] Linking static target lib/librte_telemetry.a 00:01:15.746 [240/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:15.747 [241/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:15.747 [242/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:15.747 [243/705] Linking static target lib/librte_eal.a 00:01:15.747 [244/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:15.747 [245/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:15.747 [246/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:15.747 [247/705] Linking static target lib/librte_mldev.a 00:01:15.747 [248/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:15.747 [249/705] Linking static target lib/librte_reorder.a 00:01:15.747 [250/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:15.747 [251/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:16.010 [252/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:16.010 [253/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [254/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:16.010 [255/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:16.010 [256/705] Linking static target lib/librte_rcu.a 00:01:16.010 [257/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [258/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:16.010 [259/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:16.010 [260/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:16.010 [261/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:16.010 [262/705] Linking static target lib/librte_rawdev.a 00:01:16.010 [263/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:16.010 [264/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [265/705] Linking static target lib/librte_ip_frag.a 00:01:16.010 [266/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [267/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [268/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:16.010 [269/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:16.010 [270/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:16.010 [271/705] Linking static target lib/librte_security.a 00:01:16.010 [272/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:16.010 [273/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:16.010 [274/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [275/705] Linking static target lib/librte_pcapng.a 00:01:16.010 [276/705] Linking static target lib/librte_bpf.a 00:01:16.010 [277/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:16.010 [278/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [279/705] Linking static target lib/librte_power.a 00:01:16.010 
[280/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [281/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:16.010 [282/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [283/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:16.010 [284/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:16.010 [285/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:16.010 [286/705] Linking static target lib/librte_mbuf.a 00:01:16.010 [287/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:16.010 [288/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:16.010 [289/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:16.010 [290/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.010 [291/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:16.010 [292/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:16.010 [293/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:16.010 [294/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:16.010 [295/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:16.275 [296/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:16.275 [297/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:16.275 [298/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:16.275 [299/705] Linking static target lib/librte_rib.a 00:01:16.275 [300/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:16.275 [301/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:16.275 [302/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:16.275 [303/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:16.275 [304/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.275 [305/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:16.275 [306/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:16.275 [307/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:16.275 [308/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:16.275 [309/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:16.275 [310/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:16.275 [311/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:16.275 [312/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:16.275 [313/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.275 [314/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:16.275 [315/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:16.275 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:16.275 [317/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:16.275 [318/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:16.275 [319/705] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:16.275 [320/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:16.275 [321/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:16.275 [322/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:16.275 [323/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:16.275 [324/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:16.275 [325/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.275 [326/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:16.275 [327/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:16.275 [328/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.275 [329/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:16.275 [330/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:16.275 [331/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.275 [332/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:16.275 [333/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:16.276 [334/705] Linking static target lib/librte_lpm.a 00:01:16.276 [335/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.534 [336/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:16.534 [337/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:16.534 [338/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.534 [339/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:16.534 [340/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:16.534 [341/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:16.534 [342/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:16.534 [343/705] Linking static target lib/librte_efd.a 00:01:16.534 [344/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:16.534 [345/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:16.534 [346/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:16.534 [347/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.534 [348/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:16.534 [349/705] Linking target lib/librte_telemetry.so.24.0 00:01:16.534 [350/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:16.534 [351/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:16.534 [352/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:16.534 [353/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:16.534 [354/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:16.534 [355/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.534 [356/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:16.534 [357/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:16.534 [358/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:16.535 [359/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:16.535 [360/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:16.535 [361/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:16.535 [362/705] Linking static target lib/librte_fib.a 00:01:16.535 [363/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:16.535 [364/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:16.535 [365/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.535 [366/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:16.535 [367/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:16.535 [368/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:16.535 [369/705] Linking static target lib/librte_graph.a 00:01:16.795 [370/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:16.795 [371/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.795 [372/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:16.795 [373/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:16.795 [374/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:16.795 [375/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:16.795 [376/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:16.795 [377/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:16.795 [378/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:16.795 [379/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:16.795 [380/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:16.795 [381/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:16.795 [382/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:16.795 [383/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:16.795 [384/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:16.795 [385/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:16.795 [386/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:16.795 [387/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:16.795 [388/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:16.795 [389/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:16.795 [390/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.795 [391/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:16.795 [392/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:16.795 [393/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:16.795 [394/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:16.795 [395/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:16.795 [396/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:16.795 [397/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:16.795 [398/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:16.795 [399/705] Compiling C object 
drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:16.795 [400/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:16.795 [401/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:16.795 [402/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.795 [403/705] Linking static target drivers/librte_bus_vdev.a 00:01:16.795 [404/705] Linking static target lib/librte_pdump.a 00:01:16.795 [405/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:16.796 [406/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.796 [407/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.796 [408/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:16.796 [409/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:17.053 [410/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [411/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:17.053 [412/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [413/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:17.053 [414/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:17.053 [415/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:17.053 [416/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:17.053 [417/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:17.053 [418/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:17.053 [419/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:17.053 [420/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:17.053 [421/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:17.053 [422/705] Linking static target lib/librte_table.a 00:01:17.053 [423/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:17.053 [424/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [425/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:17.053 [426/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:17.053 [427/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:17.053 [428/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:17.053 [429/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.053 [430/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.053 [431/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:17.053 [432/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:17.053 [433/705] Linking static target drivers/librte_bus_pci.a 00:01:17.053 [434/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:17.053 [435/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:17.053 [436/705] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:17.053 [437/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [438/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:17.053 [439/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:17.053 [440/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:17.053 [441/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:17.053 [442/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:17.053 [443/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:17.053 [444/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:17.053 [445/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:17.053 [446/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:17.053 [447/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [448/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:17.053 [449/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.053 [450/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:17.053 [451/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:17.053 [452/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:17.312 [453/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:17.312 [454/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:17.312 [455/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:17.312 [456/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:17.312 [457/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:17.312 [458/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:17.312 [459/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:17.312 [460/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:17.312 [461/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:17.312 [462/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:17.312 [463/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:17.312 [464/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:17.312 [465/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:17.312 [466/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:17.312 [467/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:17.312 [468/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:17.312 [469/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:17.312 [470/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:17.312 [471/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:17.312 [472/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:17.312 [473/705] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:17.312 [474/705] Linking static target lib/librte_sched.a 00:01:17.312 [475/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:17.312 [476/705] Linking static target lib/librte_ipsec.a 00:01:17.312 [477/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:17.312 [478/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:17.312 [479/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:17.312 [480/705] Linking static target lib/librte_cryptodev.a 00:01:17.312 [481/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.312 [482/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.312 [483/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:17.312 [484/705] Linking static target drivers/librte_mempool_ring.a 00:01:17.312 [485/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:17.312 [486/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:17.312 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:17.312 [488/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:17.312 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:17.312 [490/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:17.312 [491/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:17.312 [492/705] Linking static target lib/librte_node.a 00:01:17.312 [493/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:17.312 [494/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:17.312 [495/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:17.312 [496/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:17.312 [497/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.312 [498/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:17.312 [499/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:17.574 [500/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:17.574 [501/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:17.574 [502/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:17.574 [503/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:17.574 [504/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:17.574 [505/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:17.574 [506/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:17.574 [507/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:17.574 [508/705] Linking static target lib/librte_pdcp.a 00:01:17.574 [509/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:17.574 [510/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:17.574 [511/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:17.574 [512/705] 
Linking static target lib/librte_member.a 00:01:17.574 [513/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:17.574 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:17.574 [515/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:17.574 [516/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:17.574 [517/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.574 [518/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:17.574 [519/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:17.574 [520/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:17.574 [521/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:17.574 [522/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:17.574 [523/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.574 [524/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:17.574 [525/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:17.574 [526/705] Linking static target lib/librte_hash.a 00:01:17.574 [527/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:17.574 [528/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:17.835 [529/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.835 [530/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:17.835 [531/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:17.835 [532/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:17.835 [533/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:17.835 [534/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:17.835 [535/705] Linking static target lib/acl/libavx2_tmp.a 00:01:17.835 [536/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:17.835 [537/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:17.835 [538/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:17.835 [539/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:17.835 [540/705] Linking static target lib/librte_port.a 00:01:17.835 [541/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.835 [542/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:17.835 [543/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.835 [544/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.835 [545/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:17.835 [546/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:17.835 [547/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:17.835 [548/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:17.835 [549/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:17.836 [550/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:17.836 [551/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:17.836 [552/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:17.836 [553/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:17.836 [554/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:17.836 [555/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:17.836 [556/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:17.836 [557/705] Linking static target lib/librte_eventdev.a 00:01:17.836 [558/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.096 [559/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:18.096 [560/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:18.096 [561/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:18.096 [562/705] Linking static target lib/librte_acl.a 00:01:18.096 [563/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:18.096 [564/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:18.096 [565/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:18.357 [566/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:18.357 [567/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.357 [568/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:18.617 [569/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.617 [570/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.617 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:18.617 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:18.878 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:18.878 [574/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:18.878 [575/705] Linking static target lib/librte_ethdev.a 00:01:19.138 [576/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:19.138 [577/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:19.400 [578/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.400 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:19.661 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:19.661 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:19.922 [582/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:19.922 [583/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:19.922 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:19.922 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:19.922 [586/705] Linking static target drivers/librte_net_i40e.a 00:01:20.864 [587/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.864 [588/705] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:21.435 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:21.435 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.644 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:25.644 [592/705] Linking static target lib/librte_pipeline.a 00:01:26.587 [593/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.587 [594/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.587 [595/705] Linking static target lib/librte_vhost.a 00:01:26.845 [596/705] Linking target lib/librte_eal.so.24.0 00:01:26.845 [597/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:26.845 [598/705] Linking target lib/librte_ring.so.24.0 00:01:26.845 [599/705] Linking target lib/librte_jobstats.so.24.0 00:01:26.845 [600/705] Linking target lib/librte_meter.so.24.0 00:01:26.845 [601/705] Linking target lib/librte_timer.so.24.0 00:01:26.845 [602/705] Linking target lib/librte_dmadev.so.24.0 00:01:26.845 [603/705] Linking target lib/librte_cfgfile.so.24.0 00:01:26.845 [604/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:26.845 [605/705] Linking target lib/librte_pci.so.24.0 00:01:26.845 [606/705] Linking target lib/librte_stack.so.24.0 00:01:26.845 [607/705] Linking target lib/librte_rawdev.so.24.0 00:01:26.845 [608/705] Linking target lib/librte_acl.so.24.0 00:01:27.106 [609/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:27.106 [610/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:27.106 [611/705] Linking target app/dpdk-test-sad 00:01:27.106 [612/705] Linking target app/dpdk-dumpcap 00:01:27.106 [613/705] Linking target app/dpdk-pdump 00:01:27.106 [614/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:27.106 [615/705] Linking target app/dpdk-test-security-perf 00:01:27.106 [616/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:27.106 [617/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:27.106 [618/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:27.106 [619/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:27.106 [620/705] Linking target app/dpdk-test-eventdev 00:01:27.106 [621/705] Linking target lib/librte_mempool.so.24.0 00:01:27.106 [622/705] Linking target lib/librte_rcu.so.24.0 00:01:27.106 [623/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:27.106 [624/705] Linking target app/dpdk-test-acl 00:01:27.106 [625/705] Linking target app/dpdk-test-regex 00:01:27.106 [626/705] Linking target app/dpdk-proc-info 00:01:27.106 [627/705] Linking target app/dpdk-test-fib 00:01:27.106 [628/705] Linking target app/dpdk-test-cmdline 00:01:27.106 [629/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.106 [630/705] Linking target app/dpdk-graph 00:01:27.106 [631/705] Linking target app/dpdk-test-dma-perf 00:01:27.106 [632/705] Linking target app/dpdk-test-compress-perf 00:01:27.106 [633/705] Linking target app/dpdk-test-gpudev 00:01:27.106 [634/705] Linking target app/dpdk-test-flow-perf 00:01:27.106 [635/705] Linking target app/dpdk-test-pipeline 00:01:27.106 [636/705] Linking target 
app/dpdk-test-mldev 00:01:27.106 [637/705] Linking target app/dpdk-test-bbdev 00:01:27.106 [638/705] Linking target app/dpdk-test-crypto-perf 00:01:27.106 [639/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:27.106 [640/705] Linking target app/dpdk-testpmd 00:01:27.106 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:27.106 [642/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:27.367 [643/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:27.367 [644/705] Linking target lib/librte_mbuf.so.24.0 00:01:27.367 [645/705] Linking target lib/librte_rib.so.24.0 00:01:27.367 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:27.367 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:27.367 [648/705] Linking target lib/librte_bbdev.so.24.0 00:01:27.367 [649/705] Linking target lib/librte_regexdev.so.24.0 00:01:27.367 [650/705] Linking target lib/librte_net.so.24.0 00:01:27.367 [651/705] Linking target lib/librte_reorder.so.24.0 00:01:27.367 [652/705] Linking target lib/librte_compressdev.so.24.0 00:01:27.367 [653/705] Linking target lib/librte_mldev.so.24.0 00:01:27.367 [654/705] Linking target lib/librte_gpudev.so.24.0 00:01:27.367 [655/705] Linking target lib/librte_distributor.so.24.0 00:01:27.367 [656/705] Linking target lib/librte_sched.so.24.0 00:01:27.367 [657/705] Linking target lib/librte_cryptodev.so.24.0 00:01:27.367 [658/705] Linking target lib/librte_fib.so.24.0 00:01:27.628 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:27.628 [660/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:27.628 [661/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:27.628 [662/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:27.628 [663/705] Linking target lib/librte_cmdline.so.24.0 00:01:27.628 [664/705] Linking target lib/librte_hash.so.24.0 00:01:27.628 [665/705] Linking target lib/librte_ethdev.so.24.0 00:01:27.628 [666/705] Linking target lib/librte_security.so.24.0 00:01:27.889 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:27.889 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:27.889 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:27.889 [670/705] Linking target lib/librte_lpm.so.24.0 00:01:27.890 [671/705] Linking target lib/librte_efd.so.24.0 00:01:27.890 [672/705] Linking target lib/librte_member.so.24.0 00:01:27.890 [673/705] Linking target lib/librte_metrics.so.24.0 00:01:27.890 [674/705] Linking target lib/librte_ipsec.so.24.0 00:01:27.890 [675/705] Linking target lib/librte_pdcp.so.24.0 00:01:27.890 [676/705] Linking target lib/librte_gso.so.24.0 00:01:27.890 [677/705] Linking target lib/librte_eventdev.so.24.0 00:01:27.890 [678/705] Linking target lib/librte_gro.so.24.0 00:01:27.890 [679/705] Linking target lib/librte_bpf.so.24.0 00:01:27.890 [680/705] Linking target lib/librte_pcapng.so.24.0 00:01:27.890 [681/705] Linking target lib/librte_ip_frag.so.24.0 00:01:27.890 [682/705] Linking target lib/librte_power.so.24.0 00:01:27.890 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:27.890 [684/705] Generating symbol file 
lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:27.890 [685/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:27.890 [686/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:27.890 [687/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:27.890 [688/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:27.890 [689/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:28.151 [690/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:28.151 [691/705] Linking target lib/librte_latencystats.so.24.0 00:01:28.151 [692/705] Linking target lib/librte_dispatcher.so.24.0 00:01:28.151 [693/705] Linking target lib/librte_bitratestats.so.24.0 00:01:28.151 [694/705] Linking target lib/librte_graph.so.24.0 00:01:28.151 [695/705] Linking target lib/librte_port.so.24.0 00:01:28.151 [696/705] Linking target lib/librte_pdump.so.24.0 00:01:28.151 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:28.151 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:28.151 [699/705] Linking target lib/librte_node.so.24.0 00:01:28.412 [700/705] Linking target lib/librte_table.so.24.0 00:01:28.412 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:28.672 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.934 [703/705] Linking target lib/librte_vhost.so.24.0 00:01:30.854 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.854 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:30.854 01:19:56 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:30.854 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:30.854 [0/1] Installing files. 
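The DPDK build in this stage is driven by common/autobuild_common.sh, which configures the tree with the "User defined options" echoed by meson near the top of the stage and then runs the two ninja invocations shown (-j144 build, then install). As a minimal sketch only, not part of the captured output, the equivalent manual sequence reconstructed from those echoed options would look roughly like the following; the explicit `meson setup` form is used here to avoid the deprecation warning logged above, and the enable_drivers value mirrors the echoed list (which appears wrapped/truncated in the log display), so treat every flag as an approximation of what the wrapper actually passed:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    # Configure: values mirror the "User defined options" block echoed by meson above
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        -Dlibdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Build, then install into the prefix (producing the file listing that follows)
    ninja -C build-tmp -j144
    ninja -C build-tmp -j144 install

The exact flags used by autobuild_common.sh are not visible in this log, so the meson command above is a reconstruction from the echoed configuration rather than the wrapper's literal invocation.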
00:01:30.854 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:30.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:30.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:30.859 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:30.859 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:30.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:30.859 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.121 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.122 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:31.388 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:31.388 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:31.388 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.388 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:31.388 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:31.393 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:31.393 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:31.393 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:31.393 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:31.393 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:31.393 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:31.393 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:31.393 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:31.393 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:31.393 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:31.393 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:31.393 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:31.393 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:31.393 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:31.393 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:31.393 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:31.393 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:31.393 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:31.393 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:31.393 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:31.393 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:31.393 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:31.393 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:31.393 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:31.393 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:31.393 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:31.393 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:31.394 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:31.394 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:31.394 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:31.394 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:31.394 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:31.394 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:31.394 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:31.394 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:31.394 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:31.394 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:31.394 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:31.394 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:31.394 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:31.394 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:31.394 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:31.394 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:31.394 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:31.394 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:31.394 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:31.394 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:31.394 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:31.394 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:31.394 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:31.394 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:31.394 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:31.394 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:31.394 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:31.394 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:31.394 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:31.394 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:31.394 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:31.394 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:31.394 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:31.394 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:31.394 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:31.394 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:31.394 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:31.394 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:31.394 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:31.394 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:31.394 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:31.394 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:31.394 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:31.394 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:31.394 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:31.394 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:31.394 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:31.394 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:31.394 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:31.394 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:31.394 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:31.394 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:31.394 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:31.394 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:31.394 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:31.394 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:31.394 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:31.394 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:31.394 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:31.394 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:31.394 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:31.394 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:31.394 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:31.394 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:31.394 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:31.394 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:31.394 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:31.394 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:31.394 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:31.394 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:31.394 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:31.394 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:31.394 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:31.394 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:31.394 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:31.394 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:31.394 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:31.394 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:31.394 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:31.394 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:31.394 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:31.394 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:31.395 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:31.395 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:31.395 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:31.395 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:31.395 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:31.395 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:31.395 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:31.395 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:31.395 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:31.395 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:31.395 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:31.395 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:31.395 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:31.395 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:31.395 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:31.395 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:31.395 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:31.395 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:31.395 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:31.395 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:31.395 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:31.395 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:31.395 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:31.395 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:31.395 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:31.395 01:19:57 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:01:31.395 01:19:57 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:31.395 01:19:57 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:01:31.395 01:19:57 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.395 00:01:31.395 real 0m24.106s 00:01:31.395 user 7m8.057s 00:01:31.395 sys 2m44.606s 00:01:31.395 01:19:57 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:31.395 01:19:57 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:31.395 ************************************ 00:01:31.395 END TEST build_native_dpdk 00:01:31.395 ************************************ 00:01:31.395 01:19:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.395 01:19:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:31.395 01:19:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:31.656 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:31.656 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:31.656 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:31.656 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:32.229 Using 'verbs' RDMA provider 00:01:47.784 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:00.025 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:00.025 Creating mk/config.mk...done. 00:02:00.025 Creating mk/cc.flags.mk...done. 00:02:00.025 Type 'make' to build. 00:02:00.025 01:20:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:00.025 01:20:25 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:00.025 01:20:25 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:00.025 01:20:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.025 ************************************ 00:02:00.025 START TEST make 00:02:00.025 ************************************ 00:02:00.025 01:20:25 make -- common/autotest_common.sh@1121 -- $ make -j144 00:02:00.025 make[1]: Nothing to be done for 'all'. 
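The entries above show the DPDK driver libraries (librte_bus_pci, librte_bus_vdev, librte_mempool_ring, librte_net_i40e) being relocated into the dpdk/pmds-24.0 plugin subdirectory, with symlink-drivers-solibs.sh recreating the unversioned .so names as symlinks beside the ABI-versioned libraries. A minimal way to inspect that layout after an install like this one, assuming the workspace root used throughout this log (the WS variable is only illustrative shorthand, not something the job defines):

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # versioned PMDs plus their .so/.so.24 symlinks live in the plugin directory
  ls "$WS/dpdk/build/lib/dpdk/pmds-24.0/"
  # the unversioned name is a symlink to the ABI-versioned library
  readlink "$WS/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so"    # -> librte_bus_pci.so.24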
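Once the DPDK install finishes, SPDK's configure is pointed at the external build with --with-dpdk=.../dpdk/build and reports that it uses the pkg-config files installed earlier (libdpdk.pc and libdpdk-libs.pc under dpdk/build/lib/pkgconfig). As a rough sketch of how that installation could be checked by hand before configuring SPDK against it, again using WS as illustrative shorthand for the workspace root:

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  export PKG_CONFIG_PATH="$WS/dpdk/build/lib/pkgconfig"
  pkg-config --modversion libdpdk          # version of the DPDK build installed above
  pkg-config --cflags --libs libdpdk       # the flags a consumer such as SPDK picks up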
00:02:00.963 The Meson build system 00:02:00.963 Version: 1.3.1 00:02:00.963 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:00.963 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:00.963 Build type: native build 00:02:00.963 Project name: libvfio-user 00:02:00.963 Project version: 0.0.1 00:02:00.963 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.963 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:00.963 Host machine cpu family: x86_64 00:02:00.963 Host machine cpu: x86_64 00:02:00.963 Run-time dependency threads found: YES 00:02:00.963 Library dl found: YES 00:02:00.963 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.963 Run-time dependency json-c found: YES 0.17 00:02:00.963 Run-time dependency cmocka found: YES 1.1.7 00:02:00.963 Program pytest-3 found: NO 00:02:00.963 Program flake8 found: NO 00:02:00.963 Program misspell-fixer found: NO 00:02:00.963 Program restructuredtext-lint found: NO 00:02:00.963 Program valgrind found: YES (/usr/bin/valgrind) 00:02:00.963 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.963 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.963 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.963 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:00.963 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:00.963 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:00.963 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:00.963 Build targets in project: 8 00:02:00.963 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:00.963 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:00.963 00:02:00.963 libvfio-user 0.0.1 00:02:00.963 00:02:00.963 User defined options 00:02:00.963 buildtype : debug 00:02:00.963 default_library: shared 00:02:00.963 libdir : /usr/local/lib 00:02:00.963 00:02:00.963 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.222 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:01.222 [1/37] Compiling C object samples/null.p/null.c.o 00:02:01.222 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:01.222 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:01.222 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:01.222 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:01.222 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:01.222 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:01.222 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:01.222 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:01.222 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:01.222 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:01.222 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:01.222 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:01.222 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:01.222 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:01.222 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:01.222 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:01.222 [18/37] Compiling C object samples/server.p/server.c.o 00:02:01.222 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:01.222 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:01.481 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:01.481 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:01.481 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:01.481 [24/37] Compiling C object samples/client.p/client.c.o 00:02:01.481 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:01.481 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:01.481 [27/37] Linking target samples/client 00:02:01.481 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:01.481 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:01.481 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:01.481 [31/37] Linking target test/unit_tests 00:02:01.743 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:01.743 [33/37] Linking target samples/null 00:02:01.743 [34/37] Linking target samples/lspci 00:02:01.743 [35/37] Linking target samples/server 00:02:01.743 [36/37] Linking target samples/gpio-pci-idio-16 00:02:01.743 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:01.743 INFO: autodetecting backend as ninja 00:02:01.743 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
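The meson summary above records how libvfio-user was configured (buildtype debug, shared default_library, libdir /usr/local/lib) and which dependencies were found (json-c, cmocka, valgrind). The exact invocation SPDK's build scripts use is not part of this log, so the following is only a sketch of a standalone configure/build/install that would reproduce the same user-defined options; the option names are standard meson built-ins and the source and build directories are the ones shown in the summary:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  # configure with the options listed under "User defined options" above
  meson setup "$BLD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  meson compile -C "$BLD"
  # staged install into the SPDK build tree, mirroring the DESTDIR install logged below
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BLD"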
00:02:01.743 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:02.004 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:02.004 ninja: no work to do. 00:02:10.147 CC lib/ut_mock/mock.o 00:02:10.147 CC lib/log/log.o 00:02:10.147 CC lib/log/log_flags.o 00:02:10.147 CC lib/log/log_deprecated.o 00:02:10.147 CC lib/ut/ut.o 00:02:10.147 LIB libspdk_ut_mock.a 00:02:10.147 LIB libspdk_log.a 00:02:10.147 LIB libspdk_ut.a 00:02:10.147 SO libspdk_ut_mock.so.6.0 00:02:10.147 SO libspdk_log.so.7.0 00:02:10.147 SO libspdk_ut.so.2.0 00:02:10.147 SYMLINK libspdk_ut_mock.so 00:02:10.147 SYMLINK libspdk_log.so 00:02:10.147 SYMLINK libspdk_ut.so 00:02:10.147 CXX lib/trace_parser/trace.o 00:02:10.147 CC lib/ioat/ioat.o 00:02:10.147 CC lib/dma/dma.o 00:02:10.147 CC lib/util/base64.o 00:02:10.147 CC lib/util/bit_array.o 00:02:10.147 CC lib/util/cpuset.o 00:02:10.147 CC lib/util/crc16.o 00:02:10.147 CC lib/util/crc32.o 00:02:10.147 CC lib/util/crc32c.o 00:02:10.147 CC lib/util/crc32_ieee.o 00:02:10.147 CC lib/util/crc64.o 00:02:10.147 CC lib/util/dif.o 00:02:10.147 CC lib/util/fd.o 00:02:10.147 CC lib/util/file.o 00:02:10.147 CC lib/util/hexlify.o 00:02:10.147 CC lib/util/iov.o 00:02:10.147 CC lib/util/math.o 00:02:10.147 CC lib/util/pipe.o 00:02:10.147 CC lib/util/strerror_tls.o 00:02:10.147 CC lib/util/string.o 00:02:10.147 CC lib/util/uuid.o 00:02:10.147 CC lib/util/fd_group.o 00:02:10.147 CC lib/util/zipf.o 00:02:10.147 CC lib/util/xor.o 00:02:10.147 CC lib/vfio_user/host/vfio_user_pci.o 00:02:10.147 CC lib/vfio_user/host/vfio_user.o 00:02:10.147 LIB libspdk_dma.a 00:02:10.408 SO libspdk_dma.so.4.0 00:02:10.408 LIB libspdk_ioat.a 00:02:10.408 SO libspdk_ioat.so.7.0 00:02:10.408 SYMLINK libspdk_dma.so 00:02:10.408 SYMLINK libspdk_ioat.so 00:02:10.408 LIB libspdk_vfio_user.a 00:02:10.408 SO libspdk_vfio_user.so.5.0 00:02:10.408 LIB libspdk_util.a 00:02:10.408 SYMLINK libspdk_vfio_user.so 00:02:10.669 SO libspdk_util.so.9.0 00:02:10.669 SYMLINK libspdk_util.so 00:02:10.669 LIB libspdk_trace_parser.a 00:02:10.929 SO libspdk_trace_parser.so.5.0 00:02:10.929 SYMLINK libspdk_trace_parser.so 00:02:10.929 CC lib/conf/conf.o 00:02:10.929 CC lib/idxd/idxd.o 00:02:10.929 CC lib/json/json_parse.o 00:02:11.187 CC lib/idxd/idxd_user.o 00:02:11.187 CC lib/json/json_util.o 00:02:11.187 CC lib/vmd/led.o 00:02:11.187 CC lib/vmd/vmd.o 00:02:11.187 CC lib/idxd/idxd_kernel.o 00:02:11.187 CC lib/json/json_write.o 00:02:11.187 CC lib/rdma/common.o 00:02:11.187 CC lib/rdma/rdma_verbs.o 00:02:11.187 CC lib/env_dpdk/env.o 00:02:11.187 CC lib/env_dpdk/memory.o 00:02:11.187 CC lib/env_dpdk/pci.o 00:02:11.187 CC lib/env_dpdk/init.o 00:02:11.187 CC lib/env_dpdk/threads.o 00:02:11.188 CC lib/env_dpdk/pci_ioat.o 00:02:11.188 CC lib/env_dpdk/pci_virtio.o 00:02:11.188 CC lib/env_dpdk/pci_vmd.o 00:02:11.188 CC lib/env_dpdk/pci_idxd.o 00:02:11.188 CC lib/env_dpdk/pci_event.o 00:02:11.188 CC lib/env_dpdk/sigbus_handler.o 00:02:11.188 CC lib/env_dpdk/pci_dpdk.o 00:02:11.188 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.188 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.188 LIB libspdk_conf.a 00:02:11.188 SO libspdk_conf.so.6.0 00:02:11.446 LIB libspdk_json.a 00:02:11.446 LIB libspdk_rdma.a 00:02:11.446 SYMLINK libspdk_conf.so 00:02:11.446 SO libspdk_json.so.6.0 00:02:11.446 SO libspdk_rdma.so.6.0 00:02:11.446 SYMLINK libspdk_json.so 00:02:11.446 SYMLINK 
libspdk_rdma.so 00:02:11.446 LIB libspdk_idxd.a 00:02:11.706 SO libspdk_idxd.so.12.0 00:02:11.706 LIB libspdk_vmd.a 00:02:11.706 SO libspdk_vmd.so.6.0 00:02:11.706 SYMLINK libspdk_idxd.so 00:02:11.706 SYMLINK libspdk_vmd.so 00:02:11.706 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.706 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.706 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.706 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.965 LIB libspdk_jsonrpc.a 00:02:11.965 SO libspdk_jsonrpc.so.6.0 00:02:12.225 SYMLINK libspdk_jsonrpc.so 00:02:12.225 LIB libspdk_env_dpdk.a 00:02:12.225 SO libspdk_env_dpdk.so.14.0 00:02:12.485 SYMLINK libspdk_env_dpdk.so 00:02:12.485 CC lib/rpc/rpc.o 00:02:12.744 LIB libspdk_rpc.a 00:02:12.744 SO libspdk_rpc.so.6.0 00:02:12.744 SYMLINK libspdk_rpc.so 00:02:13.315 CC lib/trace/trace.o 00:02:13.315 CC lib/trace/trace_flags.o 00:02:13.315 CC lib/keyring/keyring.o 00:02:13.315 CC lib/keyring/keyring_rpc.o 00:02:13.315 CC lib/trace/trace_rpc.o 00:02:13.315 CC lib/notify/notify.o 00:02:13.315 CC lib/notify/notify_rpc.o 00:02:13.315 LIB libspdk_notify.a 00:02:13.315 SO libspdk_notify.so.6.0 00:02:13.315 LIB libspdk_keyring.a 00:02:13.315 LIB libspdk_trace.a 00:02:13.315 SO libspdk_keyring.so.1.0 00:02:13.315 SO libspdk_trace.so.10.0 00:02:13.576 SYMLINK libspdk_notify.so 00:02:13.576 SYMLINK libspdk_keyring.so 00:02:13.576 SYMLINK libspdk_trace.so 00:02:13.837 CC lib/sock/sock.o 00:02:13.837 CC lib/sock/sock_rpc.o 00:02:13.837 CC lib/thread/thread.o 00:02:13.837 CC lib/thread/iobuf.o 00:02:14.098 LIB libspdk_sock.a 00:02:14.359 SO libspdk_sock.so.9.0 00:02:14.359 SYMLINK libspdk_sock.so 00:02:14.619 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.619 CC lib/nvme/nvme_ctrlr.o 00:02:14.619 CC lib/nvme/nvme_fabric.o 00:02:14.619 CC lib/nvme/nvme_ns_cmd.o 00:02:14.619 CC lib/nvme/nvme_ns.o 00:02:14.619 CC lib/nvme/nvme_pcie_common.o 00:02:14.619 CC lib/nvme/nvme_pcie.o 00:02:14.619 CC lib/nvme/nvme_qpair.o 00:02:14.619 CC lib/nvme/nvme.o 00:02:14.619 CC lib/nvme/nvme_quirks.o 00:02:14.619 CC lib/nvme/nvme_transport.o 00:02:14.619 CC lib/nvme/nvme_discovery.o 00:02:14.619 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.619 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.619 CC lib/nvme/nvme_tcp.o 00:02:14.619 CC lib/nvme/nvme_opal.o 00:02:14.619 CC lib/nvme/nvme_io_msg.o 00:02:14.619 CC lib/nvme/nvme_poll_group.o 00:02:14.619 CC lib/nvme/nvme_zns.o 00:02:14.619 CC lib/nvme/nvme_stubs.o 00:02:14.619 CC lib/nvme/nvme_auth.o 00:02:14.619 CC lib/nvme/nvme_cuse.o 00:02:14.619 CC lib/nvme/nvme_vfio_user.o 00:02:14.619 CC lib/nvme/nvme_rdma.o 00:02:15.189 LIB libspdk_thread.a 00:02:15.189 SO libspdk_thread.so.10.0 00:02:15.189 SYMLINK libspdk_thread.so 00:02:15.450 CC lib/accel/accel.o 00:02:15.710 CC lib/accel/accel_rpc.o 00:02:15.710 CC lib/accel/accel_sw.o 00:02:15.710 CC lib/init/json_config.o 00:02:15.710 CC lib/init/subsystem.o 00:02:15.710 CC lib/init/subsystem_rpc.o 00:02:15.710 CC lib/init/rpc.o 00:02:15.710 CC lib/blob/blobstore.o 00:02:15.710 CC lib/blob/request.o 00:02:15.710 CC lib/blob/zeroes.o 00:02:15.710 CC lib/blob/blob_bs_dev.o 00:02:15.710 CC lib/vfu_tgt/tgt_endpoint.o 00:02:15.710 CC lib/vfu_tgt/tgt_rpc.o 00:02:15.710 CC lib/virtio/virtio.o 00:02:15.710 CC lib/virtio/virtio_vhost_user.o 00:02:15.710 CC lib/virtio/virtio_vfio_user.o 00:02:15.710 CC lib/virtio/virtio_pci.o 00:02:15.971 LIB libspdk_init.a 00:02:15.971 SO libspdk_init.so.5.0 00:02:15.971 LIB libspdk_virtio.a 00:02:15.971 LIB libspdk_vfu_tgt.a 00:02:15.971 SO libspdk_vfu_tgt.so.3.0 00:02:15.971 SO libspdk_virtio.so.7.0 00:02:15.971 
SYMLINK libspdk_init.so 00:02:15.971 SYMLINK libspdk_vfu_tgt.so 00:02:15.971 SYMLINK libspdk_virtio.so 00:02:16.233 CC lib/event/app.o 00:02:16.233 CC lib/event/reactor.o 00:02:16.233 CC lib/event/app_rpc.o 00:02:16.233 CC lib/event/log_rpc.o 00:02:16.233 CC lib/event/scheduler_static.o 00:02:16.494 LIB libspdk_accel.a 00:02:16.494 SO libspdk_accel.so.15.0 00:02:16.494 LIB libspdk_nvme.a 00:02:16.494 SYMLINK libspdk_accel.so 00:02:16.797 SO libspdk_nvme.so.13.0 00:02:16.797 LIB libspdk_event.a 00:02:16.797 SO libspdk_event.so.13.0 00:02:16.797 SYMLINK libspdk_event.so 00:02:16.797 CC lib/bdev/bdev.o 00:02:16.797 CC lib/bdev/bdev_rpc.o 00:02:16.797 CC lib/bdev/bdev_zone.o 00:02:16.797 CC lib/bdev/part.o 00:02:16.797 CC lib/bdev/scsi_nvme.o 00:02:17.057 SYMLINK libspdk_nvme.so 00:02:18.443 LIB libspdk_blob.a 00:02:18.443 SO libspdk_blob.so.11.0 00:02:18.443 SYMLINK libspdk_blob.so 00:02:18.703 CC lib/blobfs/blobfs.o 00:02:18.703 CC lib/blobfs/tree.o 00:02:18.703 CC lib/lvol/lvol.o 00:02:19.274 LIB libspdk_bdev.a 00:02:19.274 SO libspdk_bdev.so.15.0 00:02:19.274 SYMLINK libspdk_bdev.so 00:02:19.274 LIB libspdk_blobfs.a 00:02:19.274 SO libspdk_blobfs.so.10.0 00:02:19.536 LIB libspdk_lvol.a 00:02:19.536 SO libspdk_lvol.so.10.0 00:02:19.536 SYMLINK libspdk_blobfs.so 00:02:19.536 SYMLINK libspdk_lvol.so 00:02:19.536 CC lib/scsi/dev.o 00:02:19.536 CC lib/scsi/lun.o 00:02:19.536 CC lib/scsi/scsi_bdev.o 00:02:19.536 CC lib/scsi/port.o 00:02:19.536 CC lib/scsi/scsi.o 00:02:19.536 CC lib/scsi/scsi_pr.o 00:02:19.536 CC lib/scsi/scsi_rpc.o 00:02:19.536 CC lib/scsi/task.o 00:02:19.536 CC lib/ftl/ftl_core.o 00:02:19.536 CC lib/ftl/ftl_init.o 00:02:19.536 CC lib/ftl/ftl_io.o 00:02:19.536 CC lib/ftl/ftl_layout.o 00:02:19.536 CC lib/nbd/nbd.o 00:02:19.536 CC lib/ftl/ftl_debug.o 00:02:19.536 CC lib/ublk/ublk.o 00:02:19.536 CC lib/ftl/ftl_l2p.o 00:02:19.536 CC lib/ublk/ublk_rpc.o 00:02:19.536 CC lib/nbd/nbd_rpc.o 00:02:19.536 CC lib/ftl/ftl_sb.o 00:02:19.536 CC lib/ftl/ftl_l2p_flat.o 00:02:19.536 CC lib/ftl/ftl_nv_cache.o 00:02:19.536 CC lib/nvmf/ctrlr.o 00:02:19.536 CC lib/ftl/ftl_band.o 00:02:19.536 CC lib/nvmf/ctrlr_discovery.o 00:02:19.536 CC lib/nvmf/ctrlr_bdev.o 00:02:19.536 CC lib/ftl/ftl_band_ops.o 00:02:19.536 CC lib/nvmf/subsystem.o 00:02:19.536 CC lib/ftl/ftl_writer.o 00:02:19.536 CC lib/nvmf/nvmf.o 00:02:19.536 CC lib/ftl/ftl_rq.o 00:02:19.536 CC lib/nvmf/nvmf_rpc.o 00:02:19.536 CC lib/ftl/ftl_reloc.o 00:02:19.536 CC lib/nvmf/transport.o 00:02:19.536 CC lib/ftl/ftl_l2p_cache.o 00:02:19.536 CC lib/nvmf/tcp.o 00:02:19.536 CC lib/ftl/ftl_p2l.o 00:02:19.536 CC lib/nvmf/stubs.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.536 CC lib/nvmf/mdns_server.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.536 CC lib/nvmf/vfio_user.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.536 CC lib/nvmf/rdma.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.536 CC lib/nvmf/auth.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.536 CC lib/ftl/utils/ftl_md.o 00:02:19.536 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.536 CC lib/ftl/utils/ftl_conf.o 00:02:19.536 CC lib/ftl/utils/ftl_mempool.o 00:02:19.795 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.795 CC 
lib/ftl/utils/ftl_property.o 00:02:19.795 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.795 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.795 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.795 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.795 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.795 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.795 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:19.795 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.795 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.795 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.795 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.795 CC lib/ftl/base/ftl_base_dev.o 00:02:19.795 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.795 CC lib/ftl/ftl_trace.o 00:02:20.055 LIB libspdk_nbd.a 00:02:20.055 SO libspdk_nbd.so.7.0 00:02:20.055 LIB libspdk_scsi.a 00:02:20.316 SYMLINK libspdk_nbd.so 00:02:20.316 SO libspdk_scsi.so.9.0 00:02:20.316 LIB libspdk_ublk.a 00:02:20.316 SYMLINK libspdk_scsi.so 00:02:20.316 SO libspdk_ublk.so.3.0 00:02:20.316 SYMLINK libspdk_ublk.so 00:02:20.576 LIB libspdk_ftl.a 00:02:20.576 CC lib/vhost/vhost.o 00:02:20.576 CC lib/vhost/vhost_scsi.o 00:02:20.576 CC lib/vhost/vhost_rpc.o 00:02:20.576 CC lib/vhost/vhost_blk.o 00:02:20.576 CC lib/vhost/rte_vhost_user.o 00:02:20.576 CC lib/iscsi/conn.o 00:02:20.576 CC lib/iscsi/init_grp.o 00:02:20.576 CC lib/iscsi/iscsi.o 00:02:20.576 CC lib/iscsi/md5.o 00:02:20.576 CC lib/iscsi/param.o 00:02:20.576 CC lib/iscsi/portal_grp.o 00:02:20.576 CC lib/iscsi/tgt_node.o 00:02:20.576 CC lib/iscsi/iscsi_subsystem.o 00:02:20.576 CC lib/iscsi/iscsi_rpc.o 00:02:20.576 CC lib/iscsi/task.o 00:02:20.835 SO libspdk_ftl.so.9.0 00:02:21.095 SYMLINK libspdk_ftl.so 00:02:21.355 LIB libspdk_nvmf.a 00:02:21.615 SO libspdk_nvmf.so.18.0 00:02:21.615 LIB libspdk_vhost.a 00:02:21.615 SO libspdk_vhost.so.8.0 00:02:21.615 SYMLINK libspdk_nvmf.so 00:02:21.615 SYMLINK libspdk_vhost.so 00:02:21.876 LIB libspdk_iscsi.a 00:02:21.876 SO libspdk_iscsi.so.8.0 00:02:22.137 SYMLINK libspdk_iscsi.so 00:02:22.709 CC module/env_dpdk/env_dpdk_rpc.o 00:02:22.709 CC module/vfu_device/vfu_virtio.o 00:02:22.709 CC module/vfu_device/vfu_virtio_blk.o 00:02:22.709 CC module/vfu_device/vfu_virtio_scsi.o 00:02:22.709 CC module/vfu_device/vfu_virtio_rpc.o 00:02:22.709 LIB libspdk_env_dpdk_rpc.a 00:02:22.709 CC module/accel/iaa/accel_iaa_rpc.o 00:02:22.709 CC module/keyring/file/keyring.o 00:02:22.709 CC module/accel/error/accel_error.o 00:02:22.709 CC module/accel/iaa/accel_iaa.o 00:02:22.709 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.709 CC module/accel/error/accel_error_rpc.o 00:02:22.709 CC module/keyring/file/keyring_rpc.o 00:02:22.709 CC module/scheduler/gscheduler/gscheduler.o 00:02:22.709 CC module/accel/ioat/accel_ioat.o 00:02:22.709 CC module/accel/ioat/accel_ioat_rpc.o 00:02:22.709 CC module/accel/dsa/accel_dsa.o 00:02:22.709 CC module/keyring/linux/keyring.o 00:02:22.709 CC module/sock/posix/posix.o 00:02:22.709 CC module/accel/dsa/accel_dsa_rpc.o 00:02:22.709 CC module/keyring/linux/keyring_rpc.o 00:02:22.709 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:22.709 CC module/blob/bdev/blob_bdev.o 00:02:22.709 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.709 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.969 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.969 LIB libspdk_keyring_file.a 00:02:22.969 LIB libspdk_keyring_linux.a 00:02:22.969 LIB libspdk_scheduler_gscheduler.a 00:02:22.969 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.969 LIB libspdk_accel_error.a 00:02:22.969 SO libspdk_keyring_linux.so.1.0 00:02:22.969 SO 
libspdk_scheduler_gscheduler.so.4.0 00:02:22.969 SO libspdk_keyring_file.so.1.0 00:02:22.969 LIB libspdk_accel_ioat.a 00:02:22.969 LIB libspdk_scheduler_dynamic.a 00:02:22.969 LIB libspdk_accel_iaa.a 00:02:22.969 SO libspdk_accel_error.so.2.0 00:02:22.969 SO libspdk_accel_iaa.so.3.0 00:02:22.969 SO libspdk_accel_ioat.so.6.0 00:02:22.969 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.969 LIB libspdk_accel_dsa.a 00:02:22.969 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.969 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.969 SYMLINK libspdk_keyring_linux.so 00:02:22.969 LIB libspdk_blob_bdev.a 00:02:22.969 SYMLINK libspdk_keyring_file.so 00:02:22.970 SO libspdk_accel_dsa.so.5.0 00:02:22.970 SYMLINK libspdk_accel_iaa.so 00:02:22.970 SYMLINK libspdk_accel_ioat.so 00:02:22.970 SO libspdk_blob_bdev.so.11.0 00:02:22.970 SYMLINK libspdk_accel_error.so 00:02:22.970 SYMLINK libspdk_scheduler_dynamic.so 00:02:23.231 LIB libspdk_vfu_device.a 00:02:23.231 SYMLINK libspdk_accel_dsa.so 00:02:23.231 SYMLINK libspdk_blob_bdev.so 00:02:23.231 SO libspdk_vfu_device.so.3.0 00:02:23.231 SYMLINK libspdk_vfu_device.so 00:02:23.493 LIB libspdk_sock_posix.a 00:02:23.493 SO libspdk_sock_posix.so.6.0 00:02:23.493 SYMLINK libspdk_sock_posix.so 00:02:23.753 CC module/bdev/delay/vbdev_delay.o 00:02:23.753 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.753 CC module/bdev/error/vbdev_error.o 00:02:23.753 CC module/bdev/gpt/gpt.o 00:02:23.753 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.753 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.753 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.753 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.753 CC module/bdev/null/bdev_null.o 00:02:23.753 CC module/bdev/aio/bdev_aio.o 00:02:23.753 CC module/bdev/null/bdev_null_rpc.o 00:02:23.753 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.753 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.753 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.753 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.753 CC module/bdev/split/vbdev_split.o 00:02:23.753 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.753 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.753 CC module/bdev/malloc/bdev_malloc.o 00:02:23.753 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.753 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.753 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.753 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.753 CC module/bdev/ftl/bdev_ftl.o 00:02:23.753 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.753 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:23.753 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.753 CC module/bdev/nvme/bdev_nvme.o 00:02:23.753 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.753 CC module/bdev/nvme/nvme_rpc.o 00:02:23.753 CC module/bdev/raid/bdev_raid.o 00:02:23.753 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.753 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.754 CC module/bdev/nvme/vbdev_opal.o 00:02:23.754 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.754 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.754 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.754 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.754 CC module/bdev/raid/raid0.o 00:02:23.754 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.754 CC module/bdev/raid/raid1.o 00:02:23.754 CC module/bdev/raid/concat.o 00:02:24.014 LIB libspdk_blobfs_bdev.a 00:02:24.014 SO libspdk_blobfs_bdev.so.6.0 00:02:24.014 LIB libspdk_bdev_error.a 00:02:24.014 LIB libspdk_bdev_split.a 00:02:24.014 LIB libspdk_bdev_null.a 00:02:24.014 LIB libspdk_bdev_gpt.a 00:02:24.014 SO 
libspdk_bdev_gpt.so.6.0 00:02:24.014 SO libspdk_bdev_error.so.6.0 00:02:24.014 LIB libspdk_bdev_passthru.a 00:02:24.014 SO libspdk_bdev_null.so.6.0 00:02:24.014 SO libspdk_bdev_split.so.6.0 00:02:24.014 SYMLINK libspdk_blobfs_bdev.so 00:02:24.014 LIB libspdk_bdev_ftl.a 00:02:24.014 LIB libspdk_bdev_aio.a 00:02:24.014 LIB libspdk_bdev_delay.a 00:02:24.014 SYMLINK libspdk_bdev_gpt.so 00:02:24.014 SYMLINK libspdk_bdev_null.so 00:02:24.014 SO libspdk_bdev_passthru.so.6.0 00:02:24.014 LIB libspdk_bdev_zone_block.a 00:02:24.014 SO libspdk_bdev_aio.so.6.0 00:02:24.014 SO libspdk_bdev_ftl.so.6.0 00:02:24.014 SYMLINK libspdk_bdev_error.so 00:02:24.014 LIB libspdk_bdev_malloc.a 00:02:24.014 SO libspdk_bdev_delay.so.6.0 00:02:24.014 SYMLINK libspdk_bdev_split.so 00:02:24.014 SO libspdk_bdev_zone_block.so.6.0 00:02:24.014 SO libspdk_bdev_malloc.so.6.0 00:02:24.014 LIB libspdk_bdev_iscsi.a 00:02:24.014 SYMLINK libspdk_bdev_passthru.so 00:02:24.014 SYMLINK libspdk_bdev_aio.so 00:02:24.014 SYMLINK libspdk_bdev_ftl.so 00:02:24.275 SO libspdk_bdev_iscsi.so.6.0 00:02:24.275 SYMLINK libspdk_bdev_delay.so 00:02:24.275 LIB libspdk_bdev_virtio.a 00:02:24.275 SYMLINK libspdk_bdev_zone_block.so 00:02:24.275 LIB libspdk_bdev_lvol.a 00:02:24.275 SYMLINK libspdk_bdev_malloc.so 00:02:24.275 SO libspdk_bdev_virtio.so.6.0 00:02:24.275 SO libspdk_bdev_lvol.so.6.0 00:02:24.275 SYMLINK libspdk_bdev_iscsi.so 00:02:24.275 SYMLINK libspdk_bdev_virtio.so 00:02:24.275 SYMLINK libspdk_bdev_lvol.so 00:02:24.536 LIB libspdk_bdev_raid.a 00:02:24.536 SO libspdk_bdev_raid.so.6.0 00:02:24.802 SYMLINK libspdk_bdev_raid.so 00:02:25.746 LIB libspdk_bdev_nvme.a 00:02:25.746 SO libspdk_bdev_nvme.so.7.0 00:02:25.746 SYMLINK libspdk_bdev_nvme.so 00:02:26.690 CC module/event/subsystems/iobuf/iobuf.o 00:02:26.690 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:26.690 CC module/event/subsystems/vmd/vmd.o 00:02:26.690 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:26.690 CC module/event/subsystems/sock/sock.o 00:02:26.690 CC module/event/subsystems/keyring/keyring.o 00:02:26.690 CC module/event/subsystems/scheduler/scheduler.o 00:02:26.690 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:26.690 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:26.690 LIB libspdk_event_vhost_blk.a 00:02:26.690 LIB libspdk_event_iobuf.a 00:02:26.690 LIB libspdk_event_vfu_tgt.a 00:02:26.690 LIB libspdk_event_keyring.a 00:02:26.690 LIB libspdk_event_vmd.a 00:02:26.690 LIB libspdk_event_sock.a 00:02:26.690 LIB libspdk_event_scheduler.a 00:02:26.690 SO libspdk_event_vhost_blk.so.3.0 00:02:26.690 SO libspdk_event_iobuf.so.3.0 00:02:26.690 SO libspdk_event_keyring.so.1.0 00:02:26.690 SO libspdk_event_vfu_tgt.so.3.0 00:02:26.690 SO libspdk_event_vmd.so.6.0 00:02:26.690 SO libspdk_event_sock.so.5.0 00:02:26.690 SO libspdk_event_scheduler.so.4.0 00:02:26.690 SYMLINK libspdk_event_vhost_blk.so 00:02:26.690 SYMLINK libspdk_event_vfu_tgt.so 00:02:26.690 SYMLINK libspdk_event_keyring.so 00:02:26.690 SYMLINK libspdk_event_iobuf.so 00:02:26.690 SYMLINK libspdk_event_sock.so 00:02:26.690 SYMLINK libspdk_event_vmd.so 00:02:26.690 SYMLINK libspdk_event_scheduler.so 00:02:27.264 CC module/event/subsystems/accel/accel.o 00:02:27.264 LIB libspdk_event_accel.a 00:02:27.264 SO libspdk_event_accel.so.6.0 00:02:27.264 SYMLINK libspdk_event_accel.so 00:02:27.836 CC module/event/subsystems/bdev/bdev.o 00:02:27.836 LIB libspdk_event_bdev.a 00:02:27.836 SO libspdk_event_bdev.so.6.0 00:02:27.836 SYMLINK libspdk_event_bdev.so 00:02:28.480 CC module/event/subsystems/ublk/ublk.o 
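The LIB/SO/SYMLINK triplets in the output above report, for each component, the static archive, its versioned shared object, and the unversioned symlink check. A hedged way to confirm those artifacts after the build is sketched below; the build/lib output directory is an assumption about this workspace layout rather than something the log states directly.

    # Sketch only: list the event/bdev libraries reported above and check that
    # the unversioned .so symlinks resolve to their versioned counterparts.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
    ls -l libspdk_event_*.so* libspdk_bdev_*.so*
    # Inspect the exported symbols of one library, e.g. the NVMe bdev module:
    nm -D --defined-only libspdk_bdev_nvme.so | head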
00:02:28.480 CC module/event/subsystems/scsi/scsi.o 00:02:28.480 CC module/event/subsystems/nbd/nbd.o 00:02:28.480 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:28.480 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.480 LIB libspdk_event_ublk.a 00:02:28.480 SO libspdk_event_ublk.so.3.0 00:02:28.480 LIB libspdk_event_scsi.a 00:02:28.480 LIB libspdk_event_nbd.a 00:02:28.480 SO libspdk_event_scsi.so.6.0 00:02:28.480 SO libspdk_event_nbd.so.6.0 00:02:28.480 SYMLINK libspdk_event_ublk.so 00:02:28.480 LIB libspdk_event_nvmf.a 00:02:28.480 SYMLINK libspdk_event_scsi.so 00:02:28.480 SYMLINK libspdk_event_nbd.so 00:02:28.742 SO libspdk_event_nvmf.so.6.0 00:02:28.742 SYMLINK libspdk_event_nvmf.so 00:02:29.004 CC module/event/subsystems/iscsi/iscsi.o 00:02:29.004 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:29.004 LIB libspdk_event_vhost_scsi.a 00:02:29.004 LIB libspdk_event_iscsi.a 00:02:29.264 SO libspdk_event_vhost_scsi.so.3.0 00:02:29.264 SO libspdk_event_iscsi.so.6.0 00:02:29.264 SYMLINK libspdk_event_vhost_scsi.so 00:02:29.264 SYMLINK libspdk_event_iscsi.so 00:02:29.526 SO libspdk.so.6.0 00:02:29.526 SYMLINK libspdk.so 00:02:29.787 CXX app/trace/trace.o 00:02:29.787 CC app/spdk_top/spdk_top.o 00:02:29.787 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.787 CC app/trace_record/trace_record.o 00:02:29.787 CC app/spdk_lspci/spdk_lspci.o 00:02:29.787 TEST_HEADER include/spdk/accel.h 00:02:29.787 CC app/spdk_nvme_perf/perf.o 00:02:29.788 CC app/spdk_nvme_identify/identify.o 00:02:29.788 CC test/rpc_client/rpc_client_test.o 00:02:29.788 TEST_HEADER include/spdk/assert.h 00:02:29.788 TEST_HEADER include/spdk/accel_module.h 00:02:29.788 TEST_HEADER include/spdk/base64.h 00:02:29.788 TEST_HEADER include/spdk/bdev.h 00:02:29.788 TEST_HEADER include/spdk/barrier.h 00:02:29.788 TEST_HEADER include/spdk/bdev_module.h 00:02:29.788 TEST_HEADER include/spdk/bit_array.h 00:02:29.788 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.788 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.788 TEST_HEADER include/spdk/bit_pool.h 00:02:29.788 TEST_HEADER include/spdk/blob.h 00:02:29.788 TEST_HEADER include/spdk/blobfs.h 00:02:29.788 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.788 TEST_HEADER include/spdk/config.h 00:02:29.788 TEST_HEADER include/spdk/cpuset.h 00:02:29.788 TEST_HEADER include/spdk/conf.h 00:02:29.788 TEST_HEADER include/spdk/crc16.h 00:02:29.788 TEST_HEADER include/spdk/crc32.h 00:02:29.788 TEST_HEADER include/spdk/crc64.h 00:02:29.788 TEST_HEADER include/spdk/dif.h 00:02:29.788 CC app/spdk_dd/spdk_dd.o 00:02:29.788 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:29.788 TEST_HEADER include/spdk/dma.h 00:02:29.788 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.788 TEST_HEADER include/spdk/event.h 00:02:29.788 TEST_HEADER include/spdk/endian.h 00:02:29.788 TEST_HEADER include/spdk/env.h 00:02:29.788 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.788 TEST_HEADER include/spdk/fd.h 00:02:29.788 TEST_HEADER include/spdk/file.h 00:02:29.788 TEST_HEADER include/spdk/fd_group.h 00:02:29.788 CC app/vhost/vhost.o 00:02:29.788 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.788 TEST_HEADER include/spdk/ftl.h 00:02:29.788 TEST_HEADER include/spdk/hexlify.h 00:02:29.788 TEST_HEADER include/spdk/histogram_data.h 00:02:29.788 CC app/nvmf_tgt/nvmf_main.o 00:02:29.788 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.788 TEST_HEADER include/spdk/idxd.h 00:02:29.788 TEST_HEADER include/spdk/init.h 00:02:29.788 TEST_HEADER include/spdk/ioat.h 00:02:29.788 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.788 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:29.788 TEST_HEADER include/spdk/json.h 00:02:29.788 TEST_HEADER include/spdk/keyring.h 00:02:29.788 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.788 TEST_HEADER include/spdk/keyring_module.h 00:02:29.788 CC app/spdk_tgt/spdk_tgt.o 00:02:30.052 TEST_HEADER include/spdk/log.h 00:02:30.052 TEST_HEADER include/spdk/likely.h 00:02:30.052 TEST_HEADER include/spdk/lvol.h 00:02:30.052 TEST_HEADER include/spdk/mmio.h 00:02:30.052 TEST_HEADER include/spdk/memory.h 00:02:30.052 TEST_HEADER include/spdk/notify.h 00:02:30.052 TEST_HEADER include/spdk/nbd.h 00:02:30.052 TEST_HEADER include/spdk/nvme.h 00:02:30.052 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.052 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.052 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.052 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.052 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.052 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.052 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.052 TEST_HEADER include/spdk/nvmf.h 00:02:30.052 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.052 TEST_HEADER include/spdk/opal.h 00:02:30.052 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.052 TEST_HEADER include/spdk/opal_spec.h 00:02:30.052 TEST_HEADER include/spdk/pci_ids.h 00:02:30.052 TEST_HEADER include/spdk/pipe.h 00:02:30.052 TEST_HEADER include/spdk/queue.h 00:02:30.052 TEST_HEADER include/spdk/rpc.h 00:02:30.052 TEST_HEADER include/spdk/reduce.h 00:02:30.052 TEST_HEADER include/spdk/scheduler.h 00:02:30.052 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.052 TEST_HEADER include/spdk/scsi.h 00:02:30.052 TEST_HEADER include/spdk/sock.h 00:02:30.052 TEST_HEADER include/spdk/stdinc.h 00:02:30.052 TEST_HEADER include/spdk/string.h 00:02:30.052 TEST_HEADER include/spdk/thread.h 00:02:30.052 TEST_HEADER include/spdk/trace_parser.h 00:02:30.052 TEST_HEADER include/spdk/trace.h 00:02:30.052 TEST_HEADER include/spdk/tree.h 00:02:30.052 TEST_HEADER include/spdk/util.h 00:02:30.052 TEST_HEADER include/spdk/ublk.h 00:02:30.052 TEST_HEADER include/spdk/uuid.h 00:02:30.052 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.052 TEST_HEADER include/spdk/version.h 00:02:30.052 TEST_HEADER include/spdk/vhost.h 00:02:30.052 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.052 TEST_HEADER include/spdk/vmd.h 00:02:30.052 TEST_HEADER include/spdk/xor.h 00:02:30.052 TEST_HEADER include/spdk/zipf.h 00:02:30.052 CXX test/cpp_headers/accel_module.o 00:02:30.053 CXX test/cpp_headers/accel.o 00:02:30.053 CXX test/cpp_headers/assert.o 00:02:30.053 CXX test/cpp_headers/barrier.o 00:02:30.053 CXX test/cpp_headers/base64.o 00:02:30.053 CXX test/cpp_headers/bdev.o 00:02:30.053 CXX test/cpp_headers/bdev_zone.o 00:02:30.053 CXX test/cpp_headers/bit_array.o 00:02:30.053 CXX test/cpp_headers/bdev_module.o 00:02:30.053 CXX test/cpp_headers/bit_pool.o 00:02:30.053 CXX test/cpp_headers/blob_bdev.o 00:02:30.053 CXX test/cpp_headers/blobfs.o 00:02:30.053 CXX test/cpp_headers/blob.o 00:02:30.053 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.053 CXX test/cpp_headers/config.o 00:02:30.053 CXX test/cpp_headers/conf.o 00:02:30.053 CXX test/cpp_headers/crc16.o 00:02:30.053 CXX test/cpp_headers/crc32.o 00:02:30.053 CXX test/cpp_headers/cpuset.o 00:02:30.053 CXX test/cpp_headers/crc64.o 00:02:30.053 CXX test/cpp_headers/dif.o 00:02:30.053 CXX test/cpp_headers/endian.o 00:02:30.053 CXX test/cpp_headers/dma.o 00:02:30.053 CXX test/cpp_headers/env_dpdk.o 00:02:30.053 CXX test/cpp_headers/env.o 00:02:30.053 CXX test/cpp_headers/event.o 00:02:30.053 CXX 
test/cpp_headers/ftl.o 00:02:30.053 CXX test/cpp_headers/fd_group.o 00:02:30.053 CXX test/cpp_headers/file.o 00:02:30.053 CXX test/cpp_headers/fd.o 00:02:30.053 CXX test/cpp_headers/gpt_spec.o 00:02:30.053 CXX test/cpp_headers/histogram_data.o 00:02:30.053 CXX test/cpp_headers/hexlify.o 00:02:30.053 CXX test/cpp_headers/idxd.o 00:02:30.053 CXX test/cpp_headers/idxd_spec.o 00:02:30.053 CXX test/cpp_headers/init.o 00:02:30.053 CXX test/cpp_headers/iscsi_spec.o 00:02:30.053 CXX test/cpp_headers/ioat.o 00:02:30.053 CXX test/cpp_headers/ioat_spec.o 00:02:30.053 CXX test/cpp_headers/keyring.o 00:02:30.053 CXX test/cpp_headers/json.o 00:02:30.053 CXX test/cpp_headers/jsonrpc.o 00:02:30.053 CXX test/cpp_headers/likely.o 00:02:30.053 CXX test/cpp_headers/keyring_module.o 00:02:30.053 CXX test/cpp_headers/memory.o 00:02:30.053 CXX test/cpp_headers/log.o 00:02:30.053 CXX test/cpp_headers/lvol.o 00:02:30.053 CXX test/cpp_headers/mmio.o 00:02:30.053 CXX test/cpp_headers/nbd.o 00:02:30.053 CXX test/cpp_headers/notify.o 00:02:30.053 CXX test/cpp_headers/nvme_intel.o 00:02:30.053 CXX test/cpp_headers/nvme.o 00:02:30.053 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.053 CXX test/cpp_headers/nvme_spec.o 00:02:30.053 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.053 CXX test/cpp_headers/nvme_zns.o 00:02:30.053 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.053 CXX test/cpp_headers/nvmf_spec.o 00:02:30.053 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.053 CXX test/cpp_headers/nvmf.o 00:02:30.053 CXX test/cpp_headers/nvmf_transport.o 00:02:30.053 CC examples/nvme/hello_world/hello_world.o 00:02:30.053 CXX test/cpp_headers/opal.o 00:02:30.053 CXX test/cpp_headers/opal_spec.o 00:02:30.053 CXX test/cpp_headers/pci_ids.o 00:02:30.053 CXX test/cpp_headers/pipe.o 00:02:30.053 CXX test/cpp_headers/queue.o 00:02:30.053 CXX test/cpp_headers/reduce.o 00:02:30.053 CXX test/cpp_headers/scheduler.o 00:02:30.053 CXX test/cpp_headers/rpc.o 00:02:30.053 CC examples/nvme/abort/abort.o 00:02:30.053 CC examples/nvme/arbitration/arbitration.o 00:02:30.053 CC test/event/reactor/reactor.o 00:02:30.053 CC examples/vmd/lsvmd/lsvmd.o 00:02:30.053 CC examples/nvme/reconnect/reconnect.o 00:02:30.053 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:30.053 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:30.053 CC examples/nvme/hotplug/hotplug.o 00:02:30.053 CC examples/idxd/perf/perf.o 00:02:30.053 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:30.053 CC test/event/event_perf/event_perf.o 00:02:30.053 CC test/event/reactor_perf/reactor_perf.o 00:02:30.053 CXX test/cpp_headers/scsi.o 00:02:30.053 CC examples/vmd/led/led.o 00:02:30.053 CC test/env/memory/memory_ut.o 00:02:30.053 CC test/nvme/aer/aer.o 00:02:30.053 CC test/app/histogram_perf/histogram_perf.o 00:02:30.053 CC test/nvme/e2edp/nvme_dp.o 00:02:30.053 CC examples/ioat/verify/verify.o 00:02:30.053 CC test/thread/poller_perf/poller_perf.o 00:02:30.053 CC test/nvme/compliance/nvme_compliance.o 00:02:30.053 CC examples/bdev/bdevperf/bdevperf.o 00:02:30.053 CC test/app/jsoncat/jsoncat.o 00:02:30.053 CC examples/util/zipf/zipf.o 00:02:30.053 CC test/nvme/simple_copy/simple_copy.o 00:02:30.053 CC examples/ioat/perf/perf.o 00:02:30.053 CC app/fio/nvme/fio_plugin.o 00:02:30.053 CC test/env/vtophys/vtophys.o 00:02:30.053 CC examples/bdev/hello_world/hello_bdev.o 00:02:30.053 CC test/nvme/reset/reset.o 00:02:30.053 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.053 CC test/env/pci/pci_ut.o 00:02:30.053 CC examples/sock/hello_world/hello_sock.o 00:02:30.053 CC 
test/nvme/boot_partition/boot_partition.o 00:02:30.053 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.053 CC test/nvme/sgl/sgl.o 00:02:30.053 CC examples/accel/perf/accel_perf.o 00:02:30.053 CC test/nvme/connect_stress/connect_stress.o 00:02:30.053 CC test/nvme/overhead/overhead.o 00:02:30.053 CC test/nvme/err_injection/err_injection.o 00:02:30.053 CC test/nvme/fdp/fdp.o 00:02:30.053 CC examples/thread/thread/thread_ex.o 00:02:30.053 CC test/nvme/startup/startup.o 00:02:30.053 CC test/app/stub/stub.o 00:02:30.053 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.327 CC test/nvme/reserve/reserve.o 00:02:30.327 CC examples/blob/cli/blobcli.o 00:02:30.327 CC test/event/app_repeat/app_repeat.o 00:02:30.327 CC examples/nvmf/nvmf/nvmf.o 00:02:30.327 CC test/blobfs/mkfs/mkfs.o 00:02:30.327 CC test/nvme/cuse/cuse.o 00:02:30.327 CC examples/blob/hello_world/hello_blob.o 00:02:30.327 CC test/event/scheduler/scheduler.o 00:02:30.327 CC test/bdev/bdevio/bdevio.o 00:02:30.327 CC app/fio/bdev/fio_plugin.o 00:02:30.327 CC test/accel/dif/dif.o 00:02:30.327 CC test/dma/test_dma/test_dma.o 00:02:30.327 CC test/app/bdev_svc/bdev_svc.o 00:02:30.327 LINK spdk_lspci 00:02:30.327 LINK spdk_nvme_discover 00:02:30.327 LINK rpc_client_test 00:02:30.589 LINK interrupt_tgt 00:02:30.589 CC test/lvol/esnap/esnap.o 00:02:30.589 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.589 LINK iscsi_tgt 00:02:30.589 LINK spdk_tgt 00:02:30.589 LINK nvmf_tgt 00:02:30.589 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:30.589 LINK vhost 00:02:30.589 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:30.589 LINK spdk_trace_record 00:02:30.589 LINK lsvmd 00:02:30.848 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:30.848 LINK event_perf 00:02:30.848 LINK reactor 00:02:30.848 LINK reactor_perf 00:02:30.848 LINK led 00:02:30.848 LINK jsoncat 00:02:30.848 LINK histogram_perf 00:02:30.848 LINK cmb_copy 00:02:30.848 LINK zipf 00:02:30.848 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:30.848 LINK env_dpdk_post_init 00:02:30.848 LINK pmr_persistence 00:02:30.848 LINK startup 00:02:30.848 LINK poller_perf 00:02:30.848 LINK vtophys 00:02:30.848 CXX test/cpp_headers/scsi_spec.o 00:02:30.848 CXX test/cpp_headers/sock.o 00:02:30.848 LINK boot_partition 00:02:30.848 CXX test/cpp_headers/stdinc.o 00:02:30.848 LINK connect_stress 00:02:30.848 CXX test/cpp_headers/string.o 00:02:30.848 CXX test/cpp_headers/thread.o 00:02:30.848 LINK app_repeat 00:02:30.848 CXX test/cpp_headers/trace.o 00:02:30.848 CXX test/cpp_headers/trace_parser.o 00:02:30.848 CXX test/cpp_headers/tree.o 00:02:30.848 CXX test/cpp_headers/util.o 00:02:30.848 CXX test/cpp_headers/ublk.o 00:02:30.848 CXX test/cpp_headers/uuid.o 00:02:30.848 CXX test/cpp_headers/version.o 00:02:30.848 CXX test/cpp_headers/vfio_user_pci.o 00:02:30.848 CXX test/cpp_headers/vfio_user_spec.o 00:02:30.848 LINK reserve 00:02:30.848 CXX test/cpp_headers/vhost.o 00:02:30.848 LINK err_injection 00:02:30.848 CXX test/cpp_headers/vmd.o 00:02:30.848 LINK spdk_dd 00:02:30.848 LINK stub 00:02:30.848 CXX test/cpp_headers/xor.o 00:02:30.848 LINK fused_ordering 00:02:30.848 CXX test/cpp_headers/zipf.o 00:02:30.848 LINK ioat_perf 00:02:30.848 LINK mkfs 00:02:30.848 LINK doorbell_aers 00:02:30.848 LINK hello_sock 00:02:30.848 LINK verify 00:02:30.848 LINK sgl 00:02:30.848 LINK simple_copy 00:02:30.848 LINK hotplug 00:02:30.848 LINK thread 00:02:30.848 LINK bdev_svc 00:02:30.848 LINK hello_bdev 00:02:30.848 LINK scheduler 00:02:30.848 LINK hello_world 00:02:30.848 LINK hello_blob 00:02:31.109 LINK 
nvme_dp 00:02:31.109 LINK overhead 00:02:31.109 LINK nvme_compliance 00:02:31.109 LINK reset 00:02:31.109 LINK aer 00:02:31.109 LINK nvmf 00:02:31.109 LINK abort 00:02:31.109 LINK arbitration 00:02:31.109 LINK fdp 00:02:31.109 LINK idxd_perf 00:02:31.109 LINK reconnect 00:02:31.109 LINK spdk_trace 00:02:31.109 LINK pci_ut 00:02:31.109 LINK test_dma 00:02:31.109 LINK bdevio 00:02:31.109 LINK dif 00:02:31.109 LINK nvme_manage 00:02:31.109 LINK blobcli 00:02:31.109 LINK accel_perf 00:02:31.370 LINK nvme_fuzz 00:02:31.370 LINK spdk_nvme 00:02:31.370 LINK spdk_nvme_perf 00:02:31.370 LINK spdk_bdev 00:02:31.370 LINK vhost_fuzz 00:02:31.370 LINK spdk_nvme_identify 00:02:31.370 LINK spdk_top 00:02:31.370 LINK mem_callbacks 00:02:31.370 LINK bdevperf 00:02:31.632 LINK memory_ut 00:02:31.893 LINK cuse 00:02:32.155 LINK iscsi_fuzz 00:02:34.706 LINK esnap 00:02:34.968 00:02:34.968 real 0m35.687s 00:02:34.968 user 5m15.987s 00:02:34.968 sys 3m24.483s 00:02:34.968 01:21:01 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:34.968 01:21:01 make -- common/autotest_common.sh@10 -- $ set +x 00:02:34.968 ************************************ 00:02:34.968 END TEST make 00:02:34.968 ************************************ 00:02:34.968 01:21:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:34.968 01:21:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:34.968 01:21:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:34.968 01:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.968 01:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:34.968 01:21:01 -- pm/common@44 -- $ pid=3596374 00:02:34.968 01:21:01 -- pm/common@50 -- $ kill -TERM 3596374 00:02:34.968 01:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.968 01:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:34.968 01:21:01 -- pm/common@44 -- $ pid=3596375 00:02:34.968 01:21:01 -- pm/common@50 -- $ kill -TERM 3596375 00:02:34.968 01:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.968 01:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:34.968 01:21:01 -- pm/common@44 -- $ pid=3596377 00:02:34.968 01:21:01 -- pm/common@50 -- $ kill -TERM 3596377 00:02:34.968 01:21:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.968 01:21:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:34.968 01:21:01 -- pm/common@44 -- $ pid=3596401 00:02:34.968 01:21:01 -- pm/common@50 -- $ sudo -E kill -TERM 3596401 00:02:35.229 01:21:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:35.229 01:21:01 -- nvmf/common.sh@7 -- # uname -s 00:02:35.229 01:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.229 01:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.229 01:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.229 01:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.229 01:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.229 01:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.229 01:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.229 01:21:01 -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:02:35.229 01:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.229 01:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.229 01:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:35.229 01:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:35.229 01:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.229 01:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.229 01:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:35.229 01:21:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:35.229 01:21:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.229 01:21:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.229 01:21:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.229 01:21:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.229 01:21:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.229 01:21:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.229 01:21:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.229 01:21:01 -- paths/export.sh@5 -- # export PATH 00:02:35.229 01:21:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.229 01:21:01 -- nvmf/common.sh@47 -- # : 0 00:02:35.229 01:21:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:35.229 01:21:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:35.229 01:21:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:35.229 01:21:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.229 01:21:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.229 01:21:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:35.229 01:21:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:35.229 01:21:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:35.229 01:21:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.229 01:21:01 -- spdk/autotest.sh@32 -- # uname -s 00:02:35.229 01:21:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.229 01:21:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.229 01:21:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.229 01:21:01 -- spdk/autotest.sh@39 
-- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.229 01:21:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.229 01:21:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.229 01:21:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.229 01:21:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.229 01:21:01 -- spdk/autotest.sh@48 -- # udevadm_pid=3672444 00:02:35.229 01:21:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:35.229 01:21:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.229 01:21:01 -- pm/common@17 -- # local monitor 00:02:35.229 01:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.229 01:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.229 01:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.229 01:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.229 01:21:01 -- pm/common@21 -- # date +%s 00:02:35.229 01:21:01 -- pm/common@21 -- # date +%s 00:02:35.229 01:21:01 -- pm/common@25 -- # sleep 1 00:02:35.230 01:21:01 -- pm/common@21 -- # date +%s 00:02:35.230 01:21:01 -- pm/common@21 -- # date +%s 00:02:35.230 01:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720740061 00:02:35.230 01:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720740061 00:02:35.230 01:21:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720740061 00:02:35.230 01:21:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720740061 00:02:35.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720740061_collect-vmstat.pm.log 00:02:35.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720740061_collect-cpu-temp.pm.log 00:02:35.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720740061_collect-cpu-load.pm.log 00:02:35.230 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720740061_collect-bmc-pm.bmc.pm.log 00:02:36.174 01:21:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:36.174 01:21:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:36.174 01:21:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:36.174 01:21:02 -- common/autotest_common.sh@10 -- # set +x 00:02:36.174 01:21:02 -- spdk/autotest.sh@59 -- # create_test_list 00:02:36.174 01:21:02 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:36.174 01:21:02 -- common/autotest_common.sh@10 -- # set +x 00:02:36.174 01:21:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:36.174 01:21:02 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.174 01:21:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.174 01:21:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:36.174 01:21:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.174 01:21:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:36.174 01:21:02 -- common/autotest_common.sh@1451 -- # uname 00:02:36.174 01:21:02 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:36.174 01:21:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:36.174 01:21:02 -- common/autotest_common.sh@1471 -- # uname 00:02:36.174 01:21:02 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:36.174 01:21:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:36.174 01:21:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:36.174 01:21:02 -- spdk/autotest.sh@72 -- # hash lcov 00:02:36.174 01:21:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:36.174 01:21:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:36.174 --rc lcov_branch_coverage=1 00:02:36.174 --rc lcov_function_coverage=1 00:02:36.174 --rc genhtml_branch_coverage=1 00:02:36.174 --rc genhtml_function_coverage=1 00:02:36.174 --rc genhtml_legend=1 00:02:36.174 --rc geninfo_all_blocks=1 00:02:36.174 ' 00:02:36.174 01:21:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:36.174 --rc lcov_branch_coverage=1 00:02:36.174 --rc lcov_function_coverage=1 00:02:36.174 --rc genhtml_branch_coverage=1 00:02:36.174 --rc genhtml_function_coverage=1 00:02:36.174 --rc genhtml_legend=1 00:02:36.174 --rc geninfo_all_blocks=1 00:02:36.174 ' 00:02:36.174 01:21:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:36.174 --rc lcov_branch_coverage=1 00:02:36.174 --rc lcov_function_coverage=1 00:02:36.174 --rc genhtml_branch_coverage=1 00:02:36.174 --rc genhtml_function_coverage=1 00:02:36.174 --rc genhtml_legend=1 00:02:36.174 --rc geninfo_all_blocks=1 00:02:36.174 --no-external' 00:02:36.174 01:21:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:36.174 --rc lcov_branch_coverage=1 00:02:36.174 --rc lcov_function_coverage=1 00:02:36.174 --rc genhtml_branch_coverage=1 00:02:36.174 --rc genhtml_function_coverage=1 00:02:36.174 --rc genhtml_legend=1 00:02:36.174 --rc geninfo_all_blocks=1 00:02:36.174 --no-external' 00:02:36.174 01:21:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:36.435 lcov: LCOV version 1.14 00:02:36.435 01:21:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:48.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:00.916 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions 
found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:00.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:00.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:01.179 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:01.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:01.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:01.440 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no 
functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:01.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:01.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:05.651 01:21:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.651 01:21:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:05.651 01:21:31 -- common/autotest_common.sh@10 -- # set +x 00:03:05.651 01:21:31 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.651 01:21:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.861 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:09.861 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:09.861 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:09.861 01:21:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:09.861 01:21:35 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:09.862 01:21:35 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:09.862 01:21:35 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:09.862 01:21:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:09.862 01:21:35 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:09.862 01:21:35 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:09.862 01:21:35 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.862 01:21:35 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:09.862 01:21:35 -- 
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:09.862 01:21:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:09.862 01:21:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:09.862 01:21:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:09.862 01:21:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:09.862 01:21:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.862 No valid GPT data, bailing 00:03:09.862 01:21:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.862 01:21:35 -- scripts/common.sh@391 -- # pt= 00:03:09.862 01:21:35 -- scripts/common.sh@392 -- # return 1 00:03:09.862 01:21:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.862 1+0 records in 00:03:09.862 1+0 records out 00:03:09.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00143519 s, 731 MB/s 00:03:09.862 01:21:35 -- spdk/autotest.sh@118 -- # sync 00:03:09.862 01:21:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.862 01:21:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.862 01:21:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:18.002 01:21:43 -- spdk/autotest.sh@124 -- # uname -s 00:03:18.002 01:21:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:18.002 01:21:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.002 01:21:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.002 01:21:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.002 01:21:43 -- common/autotest_common.sh@10 -- # set +x 00:03:18.002 ************************************ 00:03:18.002 START TEST setup.sh 00:03:18.002 ************************************ 00:03:18.002 01:21:43 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.002 * Looking for test storage... 00:03:18.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.002 01:21:43 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:18.002 01:21:43 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:18.002 01:21:43 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:18.002 01:21:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.002 01:21:43 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.002 01:21:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.002 ************************************ 00:03:18.002 START TEST acl 00:03:18.002 ************************************ 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:18.002 * Looking for test storage... 
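The pre-cleanup trace above checks each NVMe namespace before the setup tests run: zoned namespaces are skipped, scripts/spdk-gpt.py reports "No valid GPT data, bailing" for /dev/nvme0n1, blkid finds no partition-table type either, and the first MiB of the namespace is zeroed with dd (the "1+0 records in/out" lines). A minimal bash sketch of that idea, not the literal autotest.sh / scripts/common.sh code, looks like this:

# Sketch: skip zoned namespaces, wipe the first MiB of any namespace that carries
# no recognizable partition table, so stale GPT metadata cannot leak into later tests.
for sysdev in /sys/block/nvme*n*; do
    dev=/dev/$(basename "$sysdev")
    if [[ -e $sysdev/queue/zoned && $(cat "$sysdev/queue/zoned") != none ]]; then
        continue                                   # leave zoned namespaces alone
    fi
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # matches the dd output in the trace
    fi
done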
00:03:18.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.002 01:21:44 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:18.002 01:21:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:18.002 01:21:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.002 01:21:44 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.208 01:21:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.208 01:21:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:22.208 01:21:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.208 01:21:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:22.208 01:21:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.208 01:21:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:25.508 Hugepages 00:03:25.508 node hugesize free / total 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 00:03:25.508 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.508 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.509 01:21:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.509 01:21:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:25.509 01:21:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.509 01:21:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.509 ************************************ 00:03:25.509 START TEST denied 00:03:25.509 ************************************ 00:03:25.509 01:21:51 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:25.509 01:21:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:25.509 01:21:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:25.509 01:21:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:25.509 01:21:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.509 01:21:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.827 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:29.827 01:21:55 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.827 01:21:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.035 00:03:34.035 real 0m8.584s 00:03:34.035 user 0m2.757s 00:03:34.035 sys 0m5.071s 00:03:34.035 01:22:00 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.035 01:22:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:34.035 ************************************ 00:03:34.035 END TEST denied 00:03:34.035 ************************************ 00:03:34.035 01:22:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:34.035 01:22:00 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:34.036 01:22:00 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:34.036 01:22:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:34.297 ************************************ 00:03:34.297 START TEST allowed 00:03:34.297 ************************************ 00:03:34.297 01:22:00 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:34.297 01:22:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:34.297 01:22:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:34.297 01:22:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.297 01:22:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.297 01:22:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:40.887 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.887 01:22:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:40.887 01:22:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:40.887 01:22:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:40.887 01:22:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.887 01:22:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.192 00:03:44.192 real 0m9.957s 00:03:44.192 user 0m2.874s 00:03:44.192 sys 0m5.374s 00:03:44.192 01:22:10 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:44.192 01:22:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:44.192 ************************************ 00:03:44.192 END TEST allowed 00:03:44.192 ************************************ 00:03:44.192 00:03:44.192 real 0m26.393s 00:03:44.192 user 0m8.555s 00:03:44.192 sys 0m15.469s 00:03:44.192 01:22:10 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:44.192 01:22:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.192 ************************************ 00:03:44.192 END TEST acl 00:03:44.192 ************************************ 00:03:44.192 01:22:10 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:44.192 01:22:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:44.192 01:22:10 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.192 01:22:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.192 ************************************ 00:03:44.192 START TEST hugepages 00:03:44.192 ************************************ 00:03:44.192 01:22:10 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:44.454 * Looking for test storage... 00:03:44.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105133776 kB' 'MemAvailable: 108834248 kB' 'Buffers: 4132 kB' 'Cached: 12079916 kB' 'SwapCached: 0 kB' 'Active: 9004252 kB' 'Inactive: 3696248 kB' 'Active(anon): 8512820 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620028 kB' 'Mapped: 187452 kB' 'Shmem: 7896368 kB' 'KReclaimable: 553736 kB' 'Slab: 1422704 kB' 'SReclaimable: 553736 kB' 'SUnreclaim: 868968 kB' 'KernelStack: 27728 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 10123352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237580 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.454 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
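The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue" entries here is bash xtrace output from the get_meminfo helper in setup/common.sh: it splits every /proc/meminfo line on ': ' and walks field by field until the requested key matches, which happens just below when Hugepagesize is reached and 2048 is echoed. A condensed sketch of that lookup, assuming the system-wide /proc/meminfo (the real helper can also read a per-node meminfo file):

# get_meminfo_sketch KEY -> prints the value of KEY from /proc/meminfo
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. Hugepagesize -> 2048 (kB)
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch Hugepagesize    # -> 2048, the value echoed in the trace below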
00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.455 01:22:10 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:44.455 01:22:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:44.455 01:22:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:44.455 01:22:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.455 01:22:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.455 ************************************ 00:03:44.455 START TEST default_setup 00:03:44.455 ************************************ 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.455 01:22:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.673 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.673 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.673 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.673 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
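By this point clear_hp has written 0 into every hugepages-*/nr_hugepages counter on both NUMA nodes, and default_setup has converted the 2097152 kB (2 GiB) request into nr_hugepages=1024 pages of the default 2048 kB size, all assigned to node 0. A rough sketch of that clear-then-allocate flow, using the standard sysfs knobs seen in the trace (root required; the concrete numbers mirror this run, everything else is an assumption):

# Clear per-node hugepage counters, then allocate 2 GiB worth of 2048 kB pages on node 0.
size_kb=2097152
hugepagesize_kb=2048                                  # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))         # 2097152 / 2048 = 1024
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"                                # the clear_hp step in the trace
    done
done
echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages

The allocation in the test itself goes through scripts/setup.sh, which also rebinds the ioatdma and NVMe devices to vfio-pci, as the surrounding "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines show.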
00:03:48.674 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.674 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107305864 kB' 'MemAvailable: 111006064 kB' 'Buffers: 4132 kB' 'Cached: 12080052 kB' 'SwapCached: 0 kB' 'Active: 9021576 kB' 'Inactive: 3696248 kB' 'Active(anon): 8530144 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636676 kB' 'Mapped: 187852 kB' 'Shmem: 7896504 kB' 'KReclaimable: 553464 kB' 'Slab: 1420568 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 867104 kB' 'KernelStack: 27856 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10141548 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
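The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" lines above is the xtrace of a loop that walks the captured /proc/meminfo fields with IFS=': ' until it reaches the requested key and returns its value. A simplified equivalent of that pattern for the plain system-wide /proc/meminfo (a sketch of the behaviour under that assumption, not the literal setup/common.sh source; the function name is illustrative):

    # Print the value of one /proc/meminfo field, e.g. AnonHugePages.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field AnonHugePages    # prints 0 on this system, per the trace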
00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.674 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
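With AnonHugePages confirmed as 0 (anon=0 above), the trace goes on to query HugePages_Surp, HugePages_Rsvd and HugePages_Total through the same loop and then checks that the pool adds up to the 1024 pages requested. A compact sketch of that end-to-end accounting check, reusing the get_meminfo_field helper sketched earlier (variable names are illustrative; the full verify step also tracks per-node expectations via nodes_test, which this sketch omits):

    nr_hugepages=1024                                  # requested earlier in the test
    surp=$(get_meminfo_field HugePages_Surp)           # surplus pages, expected 0
    resv=$(get_meminfo_field HugePages_Rsvd)           # reserved pages, expected 0
    total=$(get_meminfo_field HugePages_Total)
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting OK" \
        || echo "unexpected hugepage counts: total=$total surp=$surp resv=$resv"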
00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107308972 kB' 'MemAvailable: 111009172 kB' 'Buffers: 4132 kB' 'Cached: 12080056 kB' 'SwapCached: 0 kB' 'Active: 9020496 kB' 'Inactive: 3696248 kB' 'Active(anon): 8529064 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636116 kB' 'Mapped: 187748 kB' 'Shmem: 7896508 kB' 'KReclaimable: 553464 kB' 'Slab: 1420552 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 867088 kB' 'KernelStack: 27936 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10143020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.675 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107310276 kB' 'MemAvailable: 111010476 kB' 'Buffers: 4132 kB' 'Cached: 12080076 kB' 'SwapCached: 0 kB' 'Active: 9020960 kB' 'Inactive: 3696248 kB' 'Active(anon): 8529528 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636512 kB' 'Mapped: 187748 kB' 'Shmem: 7896528 kB' 'KReclaimable: 553464 kB' 'Slab: 1420552 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 867088 kB' 'KernelStack: 27936 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10143208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.676 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 
01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.677 nr_hugepages=1024 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.677 resv_hugepages=0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.677 surplus_hugepages=0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.677 anon_hugepages=0 00:03:48.677 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107312396 kB' 'MemAvailable: 111012596 kB' 'Buffers: 4132 kB' 'Cached: 12080096 kB' 'SwapCached: 0 kB' 'Active: 9020956 
kB' 'Inactive: 3696248 kB' 'Active(anon): 8529524 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636512 kB' 'Mapped: 187748 kB' 'Shmem: 7896548 kB' 'KReclaimable: 553464 kB' 'Slab: 1420552 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 867088 kB' 'KernelStack: 27984 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10141608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237964 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 
01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
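The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one requested (here HugePages_Total), so the pages of '[[ ... ]] / continue' pairs boil down to a single key lookup. A minimal stand-alone sketch of that lookup, assuming bash plus awk and using an illustrative name rather than the repo's actual helper:

  get_meminfo_sketch() {
      # key to fetch (e.g. HugePages_Total), optional NUMA node number
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo   # per-node lines carry a "Node N " prefix
      fi
      # Find the "<key>:" field wherever it sits on the line and print the number after it.
      awk -v k="$key" '{ for (i = 1; i < NF; i++) if ($i == (k ":")) { print $(i + 1); exit } }' "$file"
  }
  # get_meminfo_sketch HugePages_Total   -> 1024 (matches the 'echo 1024' the trace reaches below)
  # get_meminfo_sketch HugePages_Surp 0  -> per-node value read from node0/meminfo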
00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:48.679 
01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58709180 kB' 'MemUsed: 6949828 kB' 'SwapCached: 0 kB' 'Active: 2307304 kB' 'Inactive: 283520 kB' 'Active(anon): 2149556 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508180 kB' 'Mapped: 49184 kB' 'AnonPages: 85972 kB' 'Shmem: 2066912 kB' 'KernelStack: 13112 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300888 kB' 'Slab: 712352 kB' 'SReclaimable: 300888 kB' 'SUnreclaim: 411464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.679 node0=1024 expecting 1024 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.679 00:03:48.679 real 0m4.157s 00:03:48.679 user 0m1.607s 00:03:48.679 sys 0m2.546s 00:03:48.679 01:22:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.680 01:22:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:48.680 ************************************ 00:03:48.680 END TEST default_setup 00:03:48.680 ************************************ 00:03:48.680 01:22:14 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:48.680 01:22:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.680 01:22:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.680 01:22:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.680 ************************************ 00:03:48.680 START TEST per_node_1G_alloc 00:03:48.680 ************************************ 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
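The per_node_1G_alloc test started above requests 512 hugepages on each of two NUMA nodes (NRHUGE=512, HUGENODE=0,1 in the trace just below) before re-running scripts/setup.sh. For reference, the same per-node reservation can be made directly through the kernel's sysfs knobs; this is a minimal sketch assuming 2048 kB pages and root privileges, not a description of how setup.sh itself performs the allocation:

  reserve_hugepages_per_node() {
      local nr=$1; shift                       # pages per node, e.g. 512
      local node path
      for node in "$@"; do                     # NUMA node ids, e.g. 0 1
          path=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
          echo "$nr" > "$path"
          # Read back: the kernel may grant fewer pages if memory on that node is fragmented or exhausted.
          printf 'node%s: requested %s, got %s\n' "$node" "$nr" "$(cat "$path")"
      done
  }
  # reserve_hugepages_per_node 512 0 1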
00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.680 01:22:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.888 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:52.888 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107343620 kB' 'MemAvailable: 111043820 kB' 'Buffers: 4132 kB' 'Cached: 12080208 kB' 'SwapCached: 0 kB' 'Active: 9019672 kB' 'Inactive: 3696248 kB' 'Active(anon): 8528240 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634836 kB' 'Mapped: 186616 kB' 'Shmem: 7896660 kB' 'KReclaimable: 553464 kB' 'Slab: 1420456 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 866992 kB' 'KernelStack: 27904 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10131184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237900 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.888 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 
01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.889 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
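The block above is bash xtrace from setup/common.sh's get_meminfo helper: it walks the meminfo fields one line at a time, taking the `continue` branch for every key that is not the one requested (here AnonHugePages), then echoes the matching value and returns, which is why hugepages.sh ends up with anon=0. A minimal stand-alone sketch of that lookup follows; the function name lookup_meminfo and the simplified handling are my own assumptions, not the repo's exact code.

    #!/usr/bin/env bash
    # Hypothetical, simplified version of the lookup the trace shows.
    # The real get_meminfo can also read /sys/devices/system/node/node<N>/meminfo
    # and strips the leading "Node <N> " prefix when a node is requested.
    lookup_meminfo() {
        local get=$1
        local mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "${val:-0}"
            return 0
        done <"$mem_f"
        return 1
    }

    lookup_meminfo AnonHugePages    # prints 0 per the snapshot in this log
    lookup_meminfo HugePages_Total  # prints 1024 per the snapshot in this log
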
00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107350056 kB' 'MemAvailable: 111050256 kB' 'Buffers: 4132 kB' 'Cached: 12080208 kB' 'SwapCached: 0 kB' 'Active: 9019988 kB' 'Inactive: 3696248 kB' 'Active(anon): 8528556 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635152 kB' 'Mapped: 186616 kB' 'Shmem: 7896660 kB' 'KReclaimable: 553464 kB' 'Slab: 1420440 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 866976 kB' 'KernelStack: 27808 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10131200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237900 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 
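The meminfo snapshot printed just above is internally consistent on the hugepage side: 1024 pages of 2048 kB each account for the reported Hugetlb total. A quick arithmetic check, using only numbers taken from the log:

    # 1024 HugePages_Total x 2048 kB Hugepagesize
    echo $(( 1024 * 2048 ))   # -> 2097152, i.e. the 'Hugetlb: 2097152 kB' (2 GiB) shown above
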
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.890 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.891 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.891 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107350940 kB' 'MemAvailable: 111051140 kB' 'Buffers: 4132 kB' 'Cached: 12080228 kB' 'SwapCached: 0 kB' 'Active: 9019360 kB' 'Inactive: 3696248 kB' 'Active(anon): 8527928 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634548 kB' 'Mapped: 186576 kB' 'Shmem: 7896680 kB' 'KReclaimable: 553464 kB' 'Slab: 1420408 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 866944 kB' 'KernelStack: 27760 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10128352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.892 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.893 nr_hugepages=1024 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.893 resv_hugepages=0 00:03:52.893 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.893 surplus_hugepages=0 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.893 anon_hugepages=0 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.893 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107350436 kB' 'MemAvailable: 111050636 kB' 'Buffers: 4132 kB' 'Cached: 12080252 kB' 'SwapCached: 0 kB' 'Active: 9019204 kB' 'Inactive: 3696248 kB' 'Active(anon): 8527772 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634416 kB' 'Mapped: 186576 kB' 'Shmem: 7896704 kB' 'KReclaimable: 553464 kB' 'Slab: 1420360 kB' 'SReclaimable: 553464 kB' 'SUnreclaim: 866896 kB' 'KernelStack: 27792 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10128376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237804 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.894 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.895 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.896 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59767668 kB' 'MemUsed: 5891340 kB' 'SwapCached: 0 kB' 'Active: 2303948 kB' 'Inactive: 283520 kB' 'Active(anon): 2146200 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508324 kB' 'Mapped: 48428 kB' 'AnonPages: 82292 kB' 'Shmem: 2067056 kB' 'KernelStack: 13016 kB' 'PageTables: 2608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300888 kB' 'Slab: 712356 kB' 'SReclaimable: 300888 kB' 'SUnreclaim: 411468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
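A note on the heavily escaped comparisons that fill this trace (e.g. [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]): inside [[ ]] the right-hand side of == is a glob pattern, and when the script supplies it as a quoted expansion, bash's xtrace re-prints it with every character backslash-escaped to show the match is literal rather than a pattern. A minimal reproduction, assuming bash 5.x (older bash versions may render the trace differently):

#!/usr/bin/env bash
get=HugePages_Rsvd
set -x                    # enable xtrace, as the test harness does
var=MemTotal
[[ $var == "$get" ]]      # quoted RHS -> trace shows \H\u\g\e\P\a\g\e\s\_...
[[ $var == $get ]]        # unquoted RHS -> trace shows a plain, unescaped pattern
set +x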
00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.896 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
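The long runs of IFS=': ' / read -r var val _ / continue above are the field-scanning loop inside setup/common.sh's get_meminfo walking every key of /proc/meminfo (or a per-node meminfo file) until it reaches the one it was asked for, then echoing that value and returning. A stand-alone sketch of the same idea, with a hypothetical helper name get_meminfo_field; the real helper's internals differ slightly (it loads the file into an array with mapfile first):

#!/usr/bin/env bash
# Look up one field, either globally (/proc/meminfo) or for a single NUMA
# node (/sys/devices/system/node/node<N>/meminfo), and print just the number.
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <N> "; strip it so the
        # key names compare the same way for both layouts.
        [[ -n $node ]] && line=${line#Node $node }
        var=${line%%:*}
        val=${line#*:}
        if [[ $var == "$get" ]]; then
            echo $(( ${val% kB*} ))   # drop padding and an optional kB unit
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_field HugePages_Total      # whole system, e.g. 1024 in this run
get_meminfo_field HugePages_Free 0     # node 0 only, e.g. 512 in this run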
00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.897 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47582476 kB' 'MemUsed: 13097364 kB' 'SwapCached: 0 kB' 'Active: 6714948 kB' 'Inactive: 3412728 kB' 'Active(anon): 6381264 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9576104 kB' 'Mapped: 138148 kB' 'AnonPages: 551724 kB' 'Shmem: 5829692 kB' 'KernelStack: 14760 kB' 'PageTables: 6208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252576 kB' 'Slab: 708004 kB' 'SReclaimable: 252576 kB' 'SUnreclaim: 455428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
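The two printf dumps above are the node 0 and node 1 snapshots the helper parses; in this run each node carries half of the 1024-page pool. Outside the harness, the same per-node split can be checked directly from sysfs with a plain grep (a convenience, not part of the test):

grep HugePages_ /sys/devices/system/node/node*/meminfo

On this machine that would report HugePages_Total and HugePages_Free of 512 for both node0 and node1, with HugePages_Surp at 0.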
00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.898 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace: the get_meminfo loop in setup/common.sh@31-32 walks the remaining per-node meminfo fields, Inactive(file) through FilePmdMapped, skipping each one while searching for HugePages_Surp]
00:03:52.898 01:22:18
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.899 node0=512 expecting 512 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.899 node1=512 expecting 512 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.899 00:03:52.899 real 0m4.013s 00:03:52.899 user 0m1.556s 00:03:52.899 sys 0m2.524s 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.899 01:22:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.899 ************************************ 00:03:52.899 END TEST per_node_1G_alloc 00:03:52.899 ************************************ 00:03:52.899 01:22:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.899 01:22:18 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:52.899 01:22:18 
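The node0=512 expecting 512 / node1=512 expecting 512 lines above are the per-node check that closes these hugepages tests: the requested size (2097152 kB here, i.e. 1024 pages of 2048 kB, matching HugePages_Total: 1024 and Hugetlb: 2097152 kB in the meminfo snapshots) is split across the two NUMA nodes and each node's hugepage count is compared with the expected 512. A minimal, self-contained sketch of that check follows; the function name check_node_split and the use of awk are illustrative and are not the code in setup/hugepages.sh, which walks the meminfo files with read as traced above.

    #!/usr/bin/env bash
    # Expected per-node counts: 2097152 kB requested / 2048 kB per hugepage = 1024 pages,
    # split evenly across 2 NUMA nodes -> 512 pages on node0 and node1.
    declare -A expected=([0]=512 [1]=512)

    check_node_split() {
        local node actual rc=0
        for node in "${!expected[@]}"; do
            # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
            actual=$(awk '$1 == "Node" && $3 == "HugePages_Total:" {print $4}' \
                "/sys/devices/system/node/node${node}/meminfo")
            echo "node${node}=${actual} expecting ${expected[$node]}"
            [[ $actual == "${expected[$node]}" ]] || rc=1
        done
        return $rc
    }

    check_node_split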
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:52.899 01:22:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.899 ************************************ 00:03:52.899 START TEST even_2G_alloc 00:03:52.899 ************************************ 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.899 01:22:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.110 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:03:57.110 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.110 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107330276 kB' 'MemAvailable: 111030452 kB' 'Buffers: 4132 kB' 'Cached: 12080404 kB' 'SwapCached: 0 kB' 'Active: 9023248 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531816 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638320 kB' 'Mapped: 188032 kB' 'Shmem: 7896856 kB' 'KReclaimable: 553440 kB' 'Slab: 1419928 kB' 'SReclaimable: 553440 kB' 'SUnreclaim: 866488 kB' 'KernelStack: 27888 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10167704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.110 01:22:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[xtrace: the get_meminfo loop in setup/common.sh@31-32 walks the /proc/meminfo fields, Inactive through Bounce, skipping each one while searching for AnonHugePages]
00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.111 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
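Everything from the mapfile -t mem call down to the anon=0 assignment above is one pass of the same meminfo helper: it loads /proc/meminfo (or a per-node copy), strips the "Node <N> " prefix, and walks the lines with IFS=': ' read until the requested key matches, echoing its value. A minimal sketch of that pattern follows, assuming a simplified signature; the real helper in setup/common.sh also handles xtrace bookkeeping and takes the node through a variable rather than a second argument.

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node] -> echoes the field's value (e.g. "0" or "1024")
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo mem var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # "0" on this box, matching the trace above
    surp=$(get_meminfo HugePages_Surp)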
00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107328064 kB' 'MemAvailable: 111028224 kB' 'Buffers: 4132 kB' 'Cached: 12080404 kB' 'SwapCached: 0 kB' 'Active: 9024748 kB' 'Inactive: 3696248 kB' 'Active(anon): 8533316 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639864 kB' 'Mapped: 188368 kB' 'Shmem: 7896856 kB' 'KReclaimable: 553424 kB' 'Slab: 1419872 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 866448 kB' 'KernelStack: 27824 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10169452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.112 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace: the get_meminfo loop in setup/common.sh@31-32 walks the /proc/meminfo fields, Buffers through FileHugePages, skipping each one while searching for HugePages_Surp]
00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.113 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107328064 kB' 'MemAvailable: 111028224 kB' 'Buffers: 4132 kB' 'Cached: 12080436 kB' 'SwapCached: 0 kB' 'Active: 9019452 kB' 'Inactive: 3696248 kB' 'Active(anon): 8528020 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634544 kB' 'Mapped: 187500 kB' 'Shmem: 7896888 kB' 'KReclaimable: 553424 kB' 'Slab: 1419888 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 866464 kB' 'KernelStack: 27856 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10163644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.114 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.115 nr_hugepages=1024 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.115 resv_hugepages=0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.115 surplus_hugepages=0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.115 anon_hugepages=0 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.115 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
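The get_meminfo calls traced here all follow the same pattern: setup/common.sh picks /proc/meminfo for a system-wide query or /sys/devices/system/node/node<N>/meminfo for a per-node one, reads the file into an array, strips the leading "Node <n> " prefix that the per-node files carry, and then scans field by field until the requested key (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) is found. A minimal Bash sketch of that logic, reconstructed from the trace above rather than taken verbatim from setup/common.sh, with the extglob requirement made explicit:
    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above; simplified, not the exact
    # SPDK helper. Requires extglob for the "Node +([0-9]) " prefix strip.
    shopt -s extglob

    get_meminfo() {
        local get=$1            # field name, e.g. HugePages_Total
        local node=$2           # optional NUMA node; empty = system-wide
        local var val _rest
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node files exist only when a valid node number is given;
        # with an empty $node the test fails and /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Read all lines, then drop the "Node <n> " prefix so per-node
        # and system-wide files parse identically.
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan line by line: split on ": ", skip non-matching fields,
        # print the value of the requested one and stop.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _rest <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Usage mirroring the even_2G_alloc check in this log: 1024 pages of
    # 2048 kB in total, expected to be split 512/512 across nodes 0 and 1.
    total=$(get_meminfo HugePages_Total)     # 1024 in the trace
    node0=$(get_meminfo HugePages_Total 0)   # 512
    node1=$(get_meminfo HugePages_Total 1)   # 512
    (( total == node0 + node1 )) && echo "hugepages evenly allocated"
The rest of this stretch of trace is that scan unrolled by xtrace: one [[ field == pattern ]] / continue pair per meminfo field until the requested key matches, at which point the value is echoed and the function returns (0 reserved, 1024 total, 512 per node, and 0 surplus in this run).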
00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107327620 kB' 'MemAvailable: 111027780 kB' 'Buffers: 4132 kB' 'Cached: 12080456 kB' 'SwapCached: 0 kB' 'Active: 9019536 kB' 'Inactive: 3696248 kB' 'Active(anon): 8528104 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634636 kB' 'Mapped: 187500 kB' 'Shmem: 7896908 kB' 'KReclaimable: 553424 kB' 'Slab: 1419888 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 866464 kB' 'KernelStack: 27872 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10163872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 
01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.116 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 
01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59756256 kB' 'MemUsed: 5902752 kB' 'SwapCached: 0 kB' 'Active: 2306572 kB' 'Inactive: 283520 kB' 'Active(anon): 2148824 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508412 kB' 'Mapped: 48528 kB' 'AnonPages: 85016 kB' 'Shmem: 2067144 kB' 'KernelStack: 13160 kB' 'PageTables: 2924 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300848 kB' 'Slab: 712024 kB' 'SReclaimable: 300848 kB' 'SUnreclaim: 411176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.117 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 
01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.118 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47571488 kB' 'MemUsed: 13108352 kB' 'SwapCached: 0 kB' 'Active: 6713024 kB' 'Inactive: 3412728 kB' 'Active(anon): 6379340 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9576220 kB' 'Mapped: 138972 kB' 'AnonPages: 549624 kB' 'Shmem: 5829808 kB' 'KernelStack: 14712 kB' 'PageTables: 6072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252576 kB' 'Slab: 707864 kB' 'SReclaimable: 252576 kB' 'SUnreclaim: 455288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.119 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
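The records around this point all repeat the same inner loop of the get_meminfo helper traced from setup/common.sh: the node's meminfo file (or /proc/meminfo) is read with mapfile, any "Node <N> " prefix is stripped, and each "key: value" line is compared against the requested field (HugePages_Surp here), with a continue on every non-matching key and an echo of the value on the match. The following is a minimal standalone sketch of that pattern, not the exact setup/common.sh implementation; the name meminfo_value and the simplified prefix strip are assumptions made for illustration.

  # Sketch of the lookup pattern the trace repeats (hypothetical helper name).
  meminfo_value() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # Per-node queries read the NUMA node's own meminfo file when present.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; drop that prefix.
      # (The real script uses an extglob pattern; a plain glob suffices here.)
      mem=("${mem[@]#Node * }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip fields we were not asked for
          echo "$val"
          return 0
      done
      return 1
  }

Called as meminfo_value HugePages_Surp 1 it would print node 1's surplus-page count (0 in this run); with the node argument omitted it falls back to /proc/meminfo, which is what the system-wide queries later in the log do.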
00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.120 node0=512 expecting 512 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:57.120 node1=512 expecting 512 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:57.120 00:03:57.120 real 0m4.028s 00:03:57.120 user 0m1.590s 00:03:57.120 sys 0m2.500s 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:57.120 01:22:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 ************************************ 00:03:57.120 END TEST even_2G_alloc 00:03:57.120 ************************************ 00:03:57.120 01:22:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:57.120 01:22:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:57.120 01:22:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.120 01:22:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 ************************************ 00:03:57.120 START TEST odd_alloc 00:03:57.120 ************************************ 00:03:57.120 01:22:23 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.120 01:22:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.424 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.424 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 
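The odd_alloc preamble recorded above asks for 2098176 kB of hugepages (HUGEMEM=2049 MB) at the 2048 kB page size, i.e. an odd total of 1025 pages, and spreads them over the two NUMA nodes as node0=513 and node1=512; the meminfo snapshots further down show the matching HugePages_Total: 1025 and Hugetlb: 2099200 kB. The arithmetic below is an illustrative sketch of that split, with hypothetical variable names, not the exact hugepages.sh logic.

  # Illustrative arithmetic for the odd_alloc setup; names are hypothetical.
  hugemem_mb=2049                     # HUGEMEM=2049 from the test
  size_kb=$(( hugemem_mb * 1024 ))    # 2098176 kB requested
  page_kb=2048                        # Hugepagesize: 2048 kB
  nr_hugepages=1025                   # odd page count used by the test
  nodes=2

  # Spread an odd count over the nodes; one node carries the extra page,
  # matching the node0=513 / node1=512 values recorded in the trace.
  base=$(( nr_hugepages / nodes ))    # 512
  extra=$(( nr_hugepages % nodes ))   # 1
  echo "node0=$(( base + extra )) node1=$base"    # node0=513 node1=512
  echo "total kB: $(( nr_hugepages * page_kb ))"  # 2099200 kB, the Hugetlb value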
00:04:00.687 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:00.687 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107342172 kB' 'MemAvailable: 111042332 kB' 'Buffers: 4132 kB' 'Cached: 12080588 kB' 'SwapCached: 0 kB' 'Active: 9021440 kB' 'Inactive: 3696248 kB' 'Active(anon): 8530008 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635760 kB' 'Mapped: 187628 kB' 'Shmem: 7897040 kB' 'KReclaimable: 553424 kB' 'Slab: 1419308 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865884 kB' 'KernelStack: 28016 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10166200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237964 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.687 
01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 
01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
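The records above start a system-wide get_meminfo HugePages_Surp query: with an empty node argument the /sys/devices/system/node/node/meminfo check fails, so the helper falls back to /proc/meminfo, and the snapshot that follows is scanned field by field exactly as before. For reference, the same counters are also exposed directly in sysfs; this is a convenience sketch assuming the 2048 kB hugepage size shown in the dumps, not part of the test itself.

  # Read the hugepage counters straight from sysfs (2048 kB page size assumed).
  hp=/sys/kernel/mm/hugepages/hugepages-2048kB
  echo "total:   $(cat "$hp/nr_hugepages")"
  echo "free:    $(cat "$hp/free_hugepages")"
  echo "surplus: $(cat "$hp/surplus_hugepages")"
  # Per NUMA node, mirroring the node0/node1 values the test verifies:
  for n in /sys/devices/system/node/node[0-9]*; do
      echo "${n##*/}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
  done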
00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107340624 kB' 'MemAvailable: 111040784 kB' 'Buffers: 4132 kB' 'Cached: 12080592 kB' 'SwapCached: 0 kB' 'Active: 9021832 kB' 'Inactive: 3696248 kB' 'Active(anon): 8530400 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636228 kB' 'Mapped: 187596 kB' 'Shmem: 7897044 kB' 'KReclaimable: 553424 kB' 'Slab: 1419308 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865884 kB' 'KernelStack: 28048 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10167836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238012 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.688 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107340876 kB' 'MemAvailable: 111041036 kB' 'Buffers: 4132 kB' 'Cached: 12080612 kB' 'SwapCached: 0 kB' 'Active: 9021504 kB' 'Inactive: 3696248 kB' 'Active(anon): 8530072 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636292 kB' 'Mapped: 187520 kB' 'Shmem: 7897064 kB' 'KReclaimable: 553424 kB' 'Slab: 1419312 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865888 kB' 'KernelStack: 28016 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10166236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
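The trace above shows the pattern setup/common.sh uses to resolve a single meminfo field: it mapfiles /proc/meminfo (or a per-node meminfo file when a node number is supplied), strips the leading "Node N " prefix that per-node files carry, then reads "key: value" pairs with IFS=': ' until the requested field (HugePages_Surp, then HugePages_Rsvd here) matches and its value is echoed. Below is a minimal standalone sketch of that pattern, not the actual setup/common.sh helper; the function name get_meminfo_value and its calling convention are illustrative assumptions.

#!/usr/bin/env bash
# Minimal sketch (not the actual setup/common.sh): return one value from
# /proc/meminfo, or from a per-node meminfo file when a NUMA node is given.
# Mirrors the pattern in the trace: mapfile the file, strip the "Node N "
# prefix that per-node files add, then scan "key: value" pairs.
shopt -s extglob

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node lines start with "Node N "

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # value only, e.g. "0" or "1025"
            return 0
        fi
    done
    return 1                                 # field not found
}

# Example: HugePages_Surp system-wide, HugePages_Free on node 0.
get_meminfo_value HugePages_Surp
get_meminfo_value HugePages_Free 0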
00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.692 nr_hugepages=1025 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.692 resv_hugepages=0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.692 surplus_hugepages=0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.692 anon_hugepages=0 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 
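At this point the odd_alloc test has read HugePages_Surp and HugePages_Rsvd (both 0), echoed nr_hugepages=1025, and asserts that HugePages_Total equals nr_hugepages + surplus + reserved before checking how the odd total is distributed across the two NUMA nodes (512 and 513 later in the trace). A short sketch of that bookkeeping follows, using awk reads of /proc/meminfo instead of the setup/common.sh helper; the variable names and split formula are illustrative assumptions, while the expected values (1025, 0, 512/513) come from the trace itself.

#!/usr/bin/env bash
# Sketch of the odd_alloc bookkeeping visible in the trace (illustrative,
# not the actual setup/hugepages.sh). Values in comments come from the log.
nr_hugepages=1025

surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1025

# The trace asserts: HugePages_Total == nr_hugepages + surplus + reserved.
(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
}

# An odd total cannot split evenly across NUMA nodes; with two nodes the
# trace expects 512 on the first node and 513 on the second.
nodes=( /sys/devices/system/node/node[0-9]* )
no_nodes=${#nodes[@]}
base=$(( nr_hugepages / no_nodes ))
rem=$(( nr_hugepages % no_nodes ))
declare -a expect
for (( i = 0; i < no_nodes; i++ )); do
    expect[i]=$(( base + (i >= no_nodes - rem ? 1 : 0) ))
done
echo "expected per-node split: ${expect[*]}"    # "512 513" when no_nodes=2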
00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107339928 kB' 'MemAvailable: 111040088 kB' 'Buffers: 4132 kB' 'Cached: 12080632 kB' 'SwapCached: 0 kB' 'Active: 9021184 kB' 'Inactive: 3696248 kB' 'Active(anon): 8529752 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635904 kB' 'Mapped: 187520 kB' 'Shmem: 7897084 kB' 'KReclaimable: 553424 kB' 'Slab: 1419352 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865928 kB' 'KernelStack: 28080 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10166256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238012 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 01:22:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.693 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59761156 kB' 'MemUsed: 5897852 kB' 'SwapCached: 0 kB' 'Active: 2307912 kB' 'Inactive: 283520 kB' 'Active(anon): 2150164 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508436 kB' 'Mapped: 48548 kB' 'AnonPages: 86140 kB' 'Shmem: 2067168 kB' 'KernelStack: 13384 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300848 kB' 'Slab: 711572 kB' 'SReclaimable: 300848 kB' 'SUnreclaim: 410724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.694 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.957 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47578936 kB' 'MemUsed: 13100904 kB' 'SwapCached: 0 kB' 'Active: 6713416 kB' 'Inactive: 3412728 kB' 'Active(anon): 6379732 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9576368 kB' 'Mapped: 138972 kB' 'AnonPages: 549868 kB' 'Shmem: 5829956 kB' 'KernelStack: 14664 kB' 'PageTables: 5980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252576 kB' 'Slab: 707780 kB' 'SReclaimable: 252576 kB' 'SUnreclaim: 455204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.958 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
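(The xtrace above is setup/common.sh's get_meminfo walking every "key: value" pair of /sys/devices/system/node/node1/meminfo until it reaches the requested HugePages_Surp field, after stripping the "Node <N> " prefix that per-node meminfo files carry. A minimal standalone sketch of that lookup, assuming bash with extglob enabled; get_node_meminfo is a hypothetical name for illustration, not the SPDK helper itself:)

    shopt -s extglob
    # Condensed, illustrative stand-in for the get_meminfo logic traced in this log.
    # Prints the value of one meminfo key, optionally scoped to a NUMA node.
    get_node_meminfo() {                                   # hypothetical helper name
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")                   # "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"         # e.g. var=HugePages_Surp, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. "get_node_meminfo HugePages_Surp 1" would print 0 against the node1 dump shown above.
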
00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.959 01:22:27 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:00.959 node0=512 expecting 513 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:00.959 node1=513 expecting 512 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:00.959 00:04:00.959 real 0m3.956s 00:04:00.959 user 0m1.565s 00:04:00.959 sys 0m2.426s 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.959 01:22:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.959 ************************************ 00:04:00.959 END TEST odd_alloc 00:04:00.959 ************************************ 00:04:00.959 01:22:27 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:00.959 01:22:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.959 01:22:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.959 01:22:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.959 ************************************ 00:04:00.959 START TEST custom_alloc 00:04:00.959 ************************************ 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local 
_no_nodes=2 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.959 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.960 01:22:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.175 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:05.175 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 
0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106271700 kB' 'MemAvailable: 109971860 kB' 'Buffers: 4132 kB' 'Cached: 12080772 kB' 'SwapCached: 0 kB' 'Active: 9023704 kB' 'Inactive: 3696248 kB' 'Active(anon): 8532272 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638452 kB' 'Mapped: 187540 kB' 'Shmem: 7897224 kB' 'KReclaimable: 553424 kB' 'Slab: 1419360 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865936 kB' 'KernelStack: 27904 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10165776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 
01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.175 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106276792 kB' 'MemAvailable: 109976952 kB' 'Buffers: 4132 kB' 'Cached: 12080772 kB' 'SwapCached: 0 kB' 'Active: 9022828 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531396 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637600 kB' 'Mapped: 187532 kB' 'Shmem: 7897224 kB' 'KReclaimable: 553424 kB' 'Slab: 1419344 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865920 kB' 'KernelStack: 27872 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10165792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.176 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.177 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.177 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.177 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.177 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace: the HugePages_Surp lookup walks every field of the snapshot above, from SwapCached through Unaccepted, skipping each one with continue]
00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106277500 kB' 'MemAvailable: 109977660 kB' 'Buffers: 4132 kB' 'Cached: 12080772 kB' 'SwapCached: 0 kB' 'Active: 9022700 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531268 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637472 kB' 'Mapped: 187532 kB' 'Shmem: 7897224 kB' 'KReclaimable: 553424 kB' 'Slab: 1419380 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 
865956 kB' 'KernelStack: 27856 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10165812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.178 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
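The IFS=': ' / read -r var val _ / continue cycle that fills this part of the trace is the same helper being re-run for each field it is asked for. A minimal sketch of that get_meminfo pattern, reconstructed from the xtrace output rather than copied from setup/common.sh (the while-read loop form, the extglob shopt and the node$node path parameterisation are assumptions), would look like:

#!/usr/bin/env bash
# Sketch of the meminfo getter whose trace appears above: read the system-wide
# or per-node meminfo into an array, then scan it field by field until the
# requested key matches, printing its value.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With an empty $node this path does not exist (the trace shows
    # ".../node/node/meminfo"), so the helper falls back to /proc/meminfo.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it (extglob pattern).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    # Every field that is not the requested one is skipped with continue,
    # which is exactly the repetition visible in this trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this box, matching the trace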
[xtrace: the HugePages_Rsvd lookup skips every field from Inactive through HugePages_Free with continue]
00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:05.181 nr_hugepages=1536 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.181 resv_hugepages=0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.181 surplus_hugepages=0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.181 anon_hugepages=0 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106278152 kB' 'MemAvailable: 109978312 kB' 'Buffers: 4132 kB' 'Cached: 12080828 kB' 'SwapCached: 0 kB' 'Active: 9022408 kB' 'Inactive: 3696248 kB' 'Active(anon): 8530976 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637076 kB' 'Mapped: 187532 kB' 'Shmem: 7897280 kB' 'KReclaimable: 553424 kB' 'Slab: 1419380 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865956 kB' 'KernelStack: 27840 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10165836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.181 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.182 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.182 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace: the HugePages_Total lookup keeps scanning, skipping Inactive(anon) through ShmemPmdMapped with continue]
00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59765080 kB' 'MemUsed: 5893928 kB' 'SwapCached: 0 kB' 'Active: 2308036 kB' 'Inactive: 283520 kB' 'Active(anon): 2150288 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508492 kB' 'Mapped: 48560 kB' 'AnonPages: 86272 kB' 'Shmem: 2067224 kB' 'KernelStack: 13112 kB' 'PageTables: 2772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300848 kB' 'Slab: 711292 kB' 'SReclaimable: 300848 kB' 'SUnreclaim: 410444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.183 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.184 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46511916 kB' 'MemUsed: 14167924 kB' 'SwapCached: 0 kB' 'Active: 6714820 kB' 'Inactive: 3412728 kB' 'Active(anon): 6381136 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9576496 kB' 'Mapped: 138972 kB' 'AnonPages: 551196 kB' 'Shmem: 5830084 kB' 'KernelStack: 14744 kB' 'PageTables: 6220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 252576 kB' 'Slab: 708088 kB' 'SReclaimable: 252576 kB' 'SUnreclaim: 455512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.185 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.186 node0=512 
expecting 512 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:05.186 node1=1024 expecting 1024 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:05.186 00:04:05.186 real 0m4.020s 00:04:05.186 user 0m1.621s 00:04:05.186 sys 0m2.465s 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.186 01:22:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.186 ************************************ 00:04:05.186 END TEST custom_alloc 00:04:05.186 ************************************ 00:04:05.186 01:22:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:05.186 01:22:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.186 01:22:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.186 01:22:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.186 ************************************ 00:04:05.186 START TEST no_shrink_alloc 00:04:05.186 ************************************ 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.186 01:22:31 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.186 01:22:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.399 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:09.399 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.399 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:09.399 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.399 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107338948 kB' 'MemAvailable: 111039108 kB' 'Buffers: 4132 kB' 'Cached: 12080952 kB' 'SwapCached: 0 kB' 'Active: 9023888 kB' 'Inactive: 3696248 kB' 'Active(anon): 8532456 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638044 kB' 'Mapped: 186744 kB' 'Shmem: 7897404 kB' 'KReclaimable: 553424 kB' 'Slab: 1419300 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865876 kB' 'KernelStack: 27808 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10132460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 
01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.400 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107340776 kB' 'MemAvailable: 111040936 kB' 'Buffers: 4132 kB' 'Cached: 12080956 kB' 'SwapCached: 0 kB' 'Active: 9023248 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531816 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637360 kB' 'Mapped: 186728 kB' 'Shmem: 7897408 kB' 'KReclaimable: 553424 kB' 'Slab: 1419292 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865868 kB' 'KernelStack: 27776 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10132480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237772 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.401 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.402 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107341156 kB' 'MemAvailable: 111041316 kB' 'Buffers: 4132 kB' 'Cached: 12080972 kB' 'SwapCached: 0 kB' 'Active: 9022780 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531348 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637352 kB' 'Mapped: 186652 kB' 'Shmem: 7897424 kB' 'KReclaimable: 553424 kB' 'Slab: 1419280 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865856 kB' 'KernelStack: 27776 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10132500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237772 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.403 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 
01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.404 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.405 nr_hugepages=1024 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.405 resv_hugepages=0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.405 surplus_hugepages=0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.405 anon_hugepages=0 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107343700 kB' 'MemAvailable: 111043860 kB' 'Buffers: 4132 kB' 'Cached: 12080996 kB' 'SwapCached: 0 kB' 'Active: 9022920 kB' 'Inactive: 3696248 kB' 'Active(anon): 8531488 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637444 kB' 'Mapped: 186652 kB' 'Shmem: 7897448 kB' 'KReclaimable: 553424 kB' 'Slab: 1419248 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865824 kB' 'KernelStack: 27776 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10133776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237756 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 
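(Editorial note) The trace above is the setup/common.sh get_meminfo helper resolving HugePages_Rsvd to 0 and then being re-entered for HugePages_Total: hugepages.sh records resv=0, echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and re-reads /proc/meminfo field by field until the requested key matches. A minimal sketch of that lookup idiom, with illustrative names only (the real helper lives in spdk/test/setup/common.sh and uses mapfile plus read -r with IFS=': ', which is why every meminfo field shows up in the xtrace):

#!/usr/bin/env bash
# Sketch only: look up one field from /proc/meminfo, or from the per-node
# meminfo file when a NUMA node is given (per-node lines carry a "Node N " prefix).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line key val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node [0-9]* }      # drop the "Node N " prefix of per-node files
        key=${line%%:*}
        if [[ $key == "$get" ]]; then
            val=${line#*:}
            val=${val//[!0-9]/}        # keep the number, drop whitespace and "kB"
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Example: get_meminfo_sketch HugePages_Total; get_meminfo_sketch HugePages_Surp 0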
00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.405 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
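(Editorial note) The backslash runs in these comparisons, e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, are not written that way in the script: bash xtrace escapes every character of a quoted right-hand side of [[ == ]] to show that it is compared as a literal string rather than a glob pattern. In source form the check presumably reads roughly like the following (variable values here are illustrative):

var=HugePages_Total get=HugePages_Total val=1024
# literal (non-glob) comparison of the current meminfo key against the requested one
if [[ $var == "$get" ]]; then
    echo "$val"    # the helper echoes the matched value (here 1024) and returns 0
fi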
00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.406 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58725296 kB' 'MemUsed: 6933712 kB' 'SwapCached: 0 kB' 'Active: 2305908 kB' 'Inactive: 283520 kB' 'Active(anon): 2148160 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508528 kB' 'Mapped: 48504 kB' 'AnonPages: 84120 kB' 'Shmem: 2067260 kB' 'KernelStack: 13032 kB' 'PageTables: 2648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300848 kB' 'Slab: 711044 kB' 'SReclaimable: 300848 kB' 'SUnreclaim: 410196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
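(Editorial note) At this point the helper is re-run with node=0, so it parses /sys/devices/system/node/node0/meminfo to pull HugePages_Surp for that node. An alternative way to read the same per-node counters is the hugepages sysfs tree; a quick check, assuming 2048 kB pages (matching the Hugepagesize reported above) and node 0:

node=0
base=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
echo "node$node: total=$(cat "$base/nr_hugepages")" \
     "free=$(cat "$base/free_hugepages")" \
     "surplus=$(cat "$base/surplus_hugepages")"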
00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.407 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.408 node0=1024 expecting 1024 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.408 01:22:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.768 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:12.768 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.768 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.033 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107338120 kB' 'MemAvailable: 111038280 kB' 'Buffers: 4132 kB' 'Cached: 12081112 kB' 'SwapCached: 0 kB' 'Active: 9025248 kB' 'Inactive: 3696248 kB' 'Active(anon): 8533816 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639580 kB' 'Mapped: 186708 kB' 'Shmem: 7897564 kB' 'KReclaimable: 553424 kB' 'Slab: 1418736 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865312 kB' 'KernelStack: 27744 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10136840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238012 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.033 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.034 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.035 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107338440 kB' 'MemAvailable: 111038600 kB' 'Buffers: 4132 kB' 'Cached: 12081112 kB' 'SwapCached: 0 kB' 'Active: 9025332 kB' 'Inactive: 3696248 kB' 'Active(anon): 8533900 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639616 kB' 'Mapped: 186708 kB' 'Shmem: 7897564 kB' 'KReclaimable: 553424 kB' 'Slab: 1418728 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865304 kB' 'KernelStack: 27872 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10136860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237980 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.035 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.036 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
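
Note: the trace above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key; every non-matching key falls into the continue branch, and when the requested key is reached (AnonHugePages, then HugePages_Surp here) the helper echoes its value and returns, which is how hugepages.sh arrives at anon=0 and surp=0. A minimal sketch of that lookup follows, assuming the usual "Key: value [kB]" meminfo layout; get_meminfo_sketch and the node0 path are illustrative names for this note, not identifiers from the test scripts.

# Sketch of a get_meminfo-style lookup over /proc/meminfo.
get_meminfo_sketch() {
    local get=$1 var val _
    # Lines look like "HugePages_Surp:    0"; splitting on ': ' puts the
    # key in $var and the first number in $val.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

# Example: get_meminfo_sketch HugePages_Surp   -> 0 in the run above.

# For per-NUMA-node values the traced helper reads
# /sys/devices/system/node/node<N>/meminfo instead and first strips the
# "Node <N> " prefix those files put on every line, which is what the
# mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does (extglob):
shopt -s extglob
node_f=/sys/devices/system/node/node0/meminfo   # node0 is an assumed example path
if [[ -e $node_f ]]; then
    mapfile -t mem < "$node_f"
    mem=("${mem[@]#Node +([0-9]) }")            # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"
fi
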
00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107337632 kB' 'MemAvailable: 111037792 kB' 'Buffers: 4132 kB' 'Cached: 12081132 kB' 'SwapCached: 0 kB' 'Active: 9026068 kB' 'Inactive: 3696248 kB' 'Active(anon): 8534636 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640312 kB' 'Mapped: 187200 kB' 'Shmem: 7897584 kB' 'KReclaimable: 553424 kB' 'Slab: 1418728 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865304 kB' 'KernelStack: 27920 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10139688 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237996 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.037 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.037 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.038 nr_hugepages=1024 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.038 resv_hugepages=0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.038 surplus_hugepages=0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.038 anon_hugepages=0 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.038 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107331892 kB' 'MemAvailable: 111032052 kB' 'Buffers: 4132 kB' 'Cached: 12081156 kB' 'SwapCached: 0 kB' 'Active: 9030176 kB' 'Inactive: 3696248 kB' 'Active(anon): 8538744 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644400 kB' 'Mapped: 187616 kB' 'Shmem: 7897608 kB' 'KReclaimable: 553424 kB' 'Slab: 1418728 kB' 'SReclaimable: 553424 kB' 'SUnreclaim: 865304 kB' 'KernelStack: 27920 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10142776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238048 kB' 'VmallocChunk: 0 kB' 'Percpu: 127872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3992948 kB' 'DirectMap2M: 57552896 kB' 'DirectMap1G: 74448896 kB' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.039 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.040 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 58738060 kB' 'MemUsed: 6920948 kB' 'SwapCached: 0 kB' 'Active: 2309176 kB' 'Inactive: 283520 kB' 'Active(anon): 2151428 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508664 kB' 'Mapped: 48516 kB' 'AnonPages: 87224 kB' 'Shmem: 2067396 kB' 'KernelStack: 13064 kB' 'PageTables: 2772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 300848 kB' 'Slab: 710860 kB' 'SReclaimable: 300848 kB' 'SUnreclaim: 410012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 
01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.041 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.042 node0=1024 expecting 1024 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.042 00:04:13.042 real 0m8.055s 00:04:13.042 user 0m3.171s 00:04:13.042 sys 0m5.016s 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.042 01:22:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.042 ************************************ 00:04:13.042 END TEST no_shrink_alloc 00:04:13.042 ************************************ 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
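For reference while reading the trace above: the get_meminfo helper in setup/common.sh walks /proc/meminfo (or a per-node meminfo file when a node number is given), strips the "Node N " prefix, and echoes the value of the requested field. The loop below is a re-creation from the commands visible in this log; the function wrapper and the extglob shopt are assumptions, while the body mirrors the traced lines.

shopt -s extglob                                    # needed for the +([0-9]) pattern below
get_meminfo() {                                     # usage: get_meminfo HugePages_Total [node]
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo
	# With a node argument, prefer that node's meminfo; with none, node$node never exists.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	local -a mem
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")            # per-node files prefix each line with "Node N "
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. 1024 for HugePages_Total
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}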
00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.042 01:22:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.042 00:04:13.042 real 0m28.857s 00:04:13.042 user 0m11.353s 00:04:13.042 sys 0m17.898s 00:04:13.042 01:22:39 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.042 01:22:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.042 ************************************ 00:04:13.042 END TEST hugepages 00:04:13.042 ************************************ 00:04:13.303 01:22:39 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.303 01:22:39 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.303 01:22:39 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.303 01:22:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.303 ************************************ 00:04:13.303 START TEST driver 00:04:13.303 ************************************ 00:04:13.303 01:22:39 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.303 * Looking for test storage... 
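The clear_hp block traced just above returns every per-node hugepage pool to zero once the hugepages suite finishes, so the driver and devices suites that follow start from a clean slate. A condensed sketch, assuming the echo 0 seen in the trace is redirected into each pool's nr_hugepages file (the redirect target itself is not visible in this excerpt):

for node in /sys/devices/system/node/node[0-9]*; do
	for hp in "$node"/hugepages/hugepages-*; do
		echo 0 > "$hp/nr_hugepages"    # drop both the 2 MiB and 1 GiB pools on this node
	done
done
export CLEAR_HUGE=yes                  # matches the export at the end of the trace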
00:04:13.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.303 01:22:39 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:13.303 01:22:39 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.303 01:22:39 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.595 01:22:44 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:18.595 01:22:44 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.595 01:22:44 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.595 01:22:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.595 ************************************ 00:04:18.595 START TEST guess_driver 00:04:18.595 ************************************ 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:18.595 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:18.595 Looking for driver=vfio-pci 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.595 01:22:44 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.804 01:22:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.089 00:04:28.089 real 0m9.029s 00:04:28.089 user 0m3.053s 00:04:28.089 sys 0m5.232s 00:04:28.089 01:22:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.089 01:22:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.089 ************************************ 00:04:28.089 END TEST guess_driver 00:04:28.089 ************************************ 00:04:28.089 00:04:28.089 real 0m14.187s 00:04:28.089 user 0m4.625s 00:04:28.089 sys 0m8.110s 00:04:28.089 01:22:53 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.089 
01:22:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.089 ************************************ 00:04:28.089 END TEST driver 00:04:28.089 ************************************ 00:04:28.089 01:22:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:28.089 01:22:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.089 01:22:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.089 01:22:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:28.089 ************************************ 00:04:28.089 START TEST devices 00:04:28.089 ************************************ 00:04:28.089 01:22:53 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:28.089 * Looking for test storage... 00:04:28.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:28.089 01:22:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:28.089 01:22:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:28.089 01:22:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.089 01:22:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.296 01:22:58 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:32.296 01:22:58 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.297 01:22:58 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:32.297 01:22:58 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:32.297 No valid GPT data, 
bailing 00:04:32.297 01:22:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:32.297 01:22:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:32.297 01:22:58 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:32.297 01:22:58 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:32.297 01:22:58 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.297 01:22:58 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.297 01:22:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:32.297 ************************************ 00:04:32.297 START TEST nvme_mount 00:04:32.297 ************************************ 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:32.297 01:22:58 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.297 01:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:32.869 Creating new GPT entries in memory. 00:04:32.869 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.869 other utilities. 00:04:32.869 01:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.869 01:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.869 01:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.869 01:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.869 01:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:34.255 Creating new GPT entries in memory. 00:04:34.255 The operation has completed successfully. 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3716027 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
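The xtrace above reduces to a short zap/partition/format/mount sequence. A hand-run sketch of the same steps, assuming /dev/nvme0n1 is a disposable test disk and using an illustrative mount point (the test mounts under its own test/setup/nvme_mount directory):

    disk=/dev/nvme0n1
    mnt=/mnt/nvme_mount                      # illustrative; the test uses test/setup/nvme_mount

    sgdisk "$disk" --zap-all                 # destroy any existing GPT and protective MBR
    sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition, 2048-sector aligned
    mkfs.ext4 -qF "${disk}p1"                # quiet, forced ext4 on the new partition
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"

The test additionally wraps the sgdisk --new call in flock on the disk node and waits for the partition's udev event via scripts/sync_dev_uevents.sh before touching it; the sketch leaves that synchronization out.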
00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.255 01:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.566 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.566 01:23:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.828 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.828 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.828 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.828 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:37.828 01:23:04 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.828 01:23:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.040 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.041 01:23:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.347 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.348 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.348 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.348 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.348 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
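The verify loop keeps re-reading "setup.sh config" output until it sees the allowed controller (0000:65:00.0) annotated with the expected mount or holder; setup.sh prints the "Active devices: ..., so not binding PCI dev" note whenever a namespace on that controller is still in use. A rough sketch of deriving the in-use state by hand; the findmnt and holders probes are illustrative, not the script's own code:

    ctrl=nvme0
    busy=0
    for ns in /sys/class/nvme/$ctrl/${ctrl}n*; do
        dev=${ns##*/}
        # mounted directly, or via one of its partitions?
        findmnt -rno SOURCE | grep -q "^/dev/${dev}" && busy=1
        # held by another block device, e.g. a device-mapper target?
        for holder in /sys/class/block/${dev}*/holders/*; do
            [[ -e $holder ]] && busy=1
        done
    done
    (( busy )) && echo "Active devices on ${ctrl}, so not binding PCI dev"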
00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.609 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.609 00:04:45.609 real 0m13.757s 00:04:45.609 user 0m4.174s 00:04:45.609 sys 0m7.452s 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.609 01:23:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.609 ************************************ 00:04:45.609 END TEST nvme_mount 00:04:45.609 ************************************ 00:04:45.609 
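Between sub-tests, cleanup_nvme returns the disk to a blank state: unmount, wipe the partition's filesystem signature, then wipe the whole-disk label. The "bytes were erased at offset ..." lines above are wipefs reporting the ext4 magic (53 ef at 0x438), the primary and backup GPT headers ("EFI PART") and the protective MBR (55 aa) as it removes them. Roughly:

    umount /mnt/nvme_mount 2>/dev/null || true   # illustrative mount point, as above
    wipefs --all /dev/nvme0n1p1                  # drop the ext4 superblock signature
    wipefs --all /dev/nvme0n1                    # drop both GPT headers and the PMBR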
01:23:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:45.609 01:23:11 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.609 01:23:11 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.610 01:23:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.871 ************************************ 00:04:45.871 START TEST dm_mount 00:04:45.871 ************************************ 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:45.871 01:23:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:46.812 Creating new GPT entries in memory. 00:04:46.812 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:46.812 other utilities. 00:04:46.812 01:23:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:46.812 01:23:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.812 01:23:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:46.812 01:23:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.812 01:23:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:47.753 Creating new GPT entries in memory. 00:04:47.753 The operation has completed successfully. 
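dm_mount repeats the partitioning flow above but lays the disk out as two ~1 GiB partitions; the second sgdisk call appears just below. Taken together the layout is:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    sgdisk "$disk" --new=1:2048:2099199      # p1: sectors 2048..2099199   (~1 GiB)
    sgdisk "$disk" --new=2:2099200:4196351   # p2: sectors 2099200..4196351 (~1 GiB)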
00:04:47.753 01:23:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:47.753 01:23:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.753 01:23:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.753 01:23:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.753 01:23:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:48.696 The operation has completed successfully. 00:04:48.696 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.696 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.696 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3721646 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.958 01:23:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.169 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.170 01:23:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.170 
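The holder@nvme0n1p1:dm-1 / holder@nvme0n1p2:dm-1 entries above come straight from sysfs: once the two partitions back a device-mapper target, each partition's holders/ directory names the dm node. A rough sketch of building such a target and confirming the relationship; the linear concatenation table here is generic, since the test's own dmsetup table is not echoed in the log:

    # join p1 and p2 into a single linear device-mapper target (illustrative table)
    s1=$(blockdev --getsz /dev/nvme0n1p1)        # sizes in 512-byte sectors
    s2=$(blockdev --getsz /dev/nvme0n1p2)
    {
        echo "0 $s1 linear /dev/nvme0n1p1 0"
        echo "$s1 $s2 linear /dev/nvme0n1p2 0"
    } | dmsetup create nvme_dm_test

    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-1
    ls /sys/class/block/nvme0n1p1/holders/                     # -> dm-1
    ls /sys/class/block/nvme0n1p2/holders/                     # -> dm-1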
01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.170 01:23:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:56.466 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:56.466 00:04:56.466 real 0m10.782s 00:04:56.466 user 0m2.860s 00:04:56.466 sys 0m4.965s 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.466 01:23:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.466 ************************************ 00:04:56.466 END TEST dm_mount 00:04:56.466 ************************************ 00:04:56.466 01:23:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:56.466 01:23:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:56.466 01:23:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.466 01:23:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
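cleanup_dm above removes the mapping first and then wipes both partitions; the final cleanup that follows re-runs the nvme cleanup against the whole disk. The dm-specific teardown in isolation:

    dmsetup remove --force nvme_dm_test      # tear down the mapping even if still open
    wipefs --all /dev/nvme0n1p1
    wipefs --all /dev/nvme0n1p2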
00:04:56.466 01:23:22 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.726 01:23:22 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.726 01:23:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.726 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:56.726 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:56.726 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.726 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.726 01:23:23 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:56.726 01:23:23 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.985 01:23:23 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.985 01:23:23 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.985 01:23:23 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.985 01:23:23 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.985 01:23:23 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:56.985 00:04:56.985 real 0m29.405s 00:04:56.985 user 0m8.774s 00:04:56.985 sys 0m15.430s 00:04:56.985 01:23:23 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.985 01:23:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:56.985 ************************************ 00:04:56.985 END TEST devices 00:04:56.985 ************************************ 00:04:56.985 00:04:56.985 real 1m39.244s 00:04:56.985 user 0m33.454s 00:04:56.985 sys 0m57.185s 00:04:56.985 01:23:23 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.985 01:23:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.985 ************************************ 00:04:56.985 END TEST setup.sh 00:04:56.985 ************************************ 00:04:56.985 01:23:23 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:01.289 Hugepages 00:05:01.289 node hugesize free / total 00:05:01.289 node0 1048576kB 0 / 0 00:05:01.289 node0 2048kB 2048 / 2048 00:05:01.289 node1 1048576kB 0 / 0 00:05:01.289 node1 2048kB 0 / 0 00:05:01.289 00:05:01.289 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.289 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:01.289 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:01.289 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:01.289 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:01.289 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:01.289 01:23:26 -- spdk/autotest.sh@130 -- # uname -s 
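The Hugepages block above is reported per NUMA node; the same numbers can be read straight from sysfs. A small sketch using the standard kernel interface (not an SPDK helper):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "${node##*/} ${size}: ${free} free / ${total} total"
        done
    done

On this node only the 2048kB pool on node0 is populated (2048 pages), matching the table above.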
00:05:01.289 01:23:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:01.289 01:23:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:01.289 01:23:27 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.605 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.605 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:06.516 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:06.516 01:23:32 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:07.455 01:23:33 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:07.455 01:23:33 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:07.455 01:23:33 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.455 01:23:33 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:07.455 01:23:33 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:07.455 01:23:33 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:07.455 01:23:33 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.455 01:23:33 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.455 01:23:33 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:07.455 01:23:33 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:07.455 01:23:33 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:07.455 01:23:33 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.662 Waiting for block devices as requested 00:05:11.662 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:11.662 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:11.923 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:11.923 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:12.184 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:12.184 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:12.184 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:12.184 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:12.445 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:12.446 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:12.446 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:12.446 01:23:38 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:05:12.446 01:23:38 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:05:12.446 01:23:38 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:12.446 01:23:38 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:12.446 01:23:38 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:12.446 01:23:38 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:12.446 01:23:38 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:05:12.446 01:23:38 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:12.446 01:23:38 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:12.446 01:23:38 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:12.446 01:23:38 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:12.446 01:23:38 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:12.446 01:23:38 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:12.446 01:23:38 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:12.446 01:23:38 -- common/autotest_common.sh@1553 -- # continue 00:05:12.446 01:23:38 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:12.446 01:23:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.446 01:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.707 01:23:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:12.707 01:23:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:12.707 01:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.707 01:23:38 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.916 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:16.916 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:16.916 01:23:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:16.916 01:23:42 -- common/autotest_common.sh@726 -- # xtrace_disable 
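The loop above maps a PCI address back to its NVMe character device through sysfs, then reads the OACS field from nvme id-ctrl to see whether the controller supports namespace management (bit 3, hence the 0x5f -> 8 result above). A standalone sketch of the same probe, assuming nvme-cli is installed and the controller sits at 0000:65:00.0:

    bdf=0000:65:00.0
    for link in /sys/class/nvme/nvme*; do
        if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
            ctrl=/dev/$(basename "$link")        # e.g. /dev/nvme0
        fi
    done

    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/oacs/ {print $2}')   # e.g. 0x5f
    if (( oacs & 0x08 )); then
        echo "$ctrl supports namespace management"
    fi

The test then reads unvmcap the same way; since it is already 0 here there is nothing to reclaim, so the loop continues without reverting the namespace.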
00:05:16.916 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.916 01:23:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:16.916 01:23:42 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:16.916 01:23:42 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.916 01:23:42 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:16.916 01:23:42 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:16.916 01:23:42 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:16.916 01:23:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:16.916 01:23:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:16.916 01:23:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.916 01:23:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.916 01:23:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:16.916 01:23:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:16.916 01:23:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:16.916 01:23:42 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:16.916 01:23:42 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:16.916 01:23:42 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:05:16.916 01:23:42 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:16.916 01:23:42 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:16.916 01:23:42 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:16.916 01:23:42 -- common/autotest_common.sh@1589 -- # return 0 00:05:16.916 01:23:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:16.916 01:23:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:16.916 01:23:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:16.916 01:23:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:16.916 01:23:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:16.916 01:23:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:16.916 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.916 01:23:42 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:16.916 01:23:42 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.916 01:23:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.916 01:23:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.916 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.916 ************************************ 00:05:16.916 START TEST env 00:05:16.916 ************************************ 00:05:16.916 01:23:42 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.916 * Looking for test storage... 
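Earlier in this block, opal_revert_cleanup only acts on controllers whose PCI device ID matches 0x0a54; it reads the ID straight from sysfs, and the controller here (vendor 144d) reports 0xa80a, so nothing is reverted. The check reduces to roughly:

    bdf=0000:65:00.0
    device_id=$(cat /sys/bus/pci/devices/$bdf/device)   # 0xa80a on this node
    if [[ $device_id == 0x0a54 ]]; then
        echo "$bdf matches, running opal revert"
    fi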
00:05:16.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:16.916 01:23:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.916 01:23:43 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.916 01:23:43 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.916 01:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.916 ************************************ 00:05:16.916 START TEST env_memory 00:05:16.916 ************************************ 00:05:16.916 01:23:43 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.916 00:05:16.916 00:05:16.916 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.916 http://cunit.sourceforge.net/ 00:05:16.916 00:05:16.916 00:05:16.916 Suite: memory 00:05:16.916 Test: alloc and free memory map ...[2024-07-12 01:23:43.172321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.916 passed 00:05:16.916 Test: mem map translation ...[2024-07-12 01:23:43.198078] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.916 [2024-07-12 01:23:43.198110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.916 [2024-07-12 01:23:43.198158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.916 [2024-07-12 01:23:43.198166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.916 passed 00:05:16.916 Test: mem map registration ...[2024-07-12 01:23:43.253627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:16.916 [2024-07-12 01:23:43.253650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:16.916 passed 00:05:17.178 Test: mem map adjacent registrations ...passed 00:05:17.178 00:05:17.178 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.178 suites 1 1 n/a 0 0 00:05:17.178 tests 4 4 4 0 0 00:05:17.178 asserts 152 152 152 0 n/a 00:05:17.178 00:05:17.178 Elapsed time = 0.192 seconds 00:05:17.178 00:05:17.178 real 0m0.207s 00:05:17.178 user 0m0.192s 00:05:17.178 sys 0m0.014s 00:05:17.178 01:23:43 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.178 01:23:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:17.178 ************************************ 00:05:17.178 END TEST env_memory 00:05:17.178 ************************************ 00:05:17.178 01:23:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.178 01:23:43 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.178 01:23:43 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:17.178 01:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.178 ************************************ 00:05:17.178 START TEST env_vtophys 00:05:17.178 ************************************ 00:05:17.178 01:23:43 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:17.178 EAL: lib.eal log level changed from notice to debug 00:05:17.178 EAL: Detected lcore 0 as core 0 on socket 0 00:05:17.178 EAL: Detected lcore 1 as core 1 on socket 0 00:05:17.178 EAL: Detected lcore 2 as core 2 on socket 0 00:05:17.178 EAL: Detected lcore 3 as core 3 on socket 0 00:05:17.178 EAL: Detected lcore 4 as core 4 on socket 0 00:05:17.178 EAL: Detected lcore 5 as core 5 on socket 0 00:05:17.178 EAL: Detected lcore 6 as core 6 on socket 0 00:05:17.178 EAL: Detected lcore 7 as core 7 on socket 0 00:05:17.178 EAL: Detected lcore 8 as core 8 on socket 0 00:05:17.178 EAL: Detected lcore 9 as core 9 on socket 0 00:05:17.178 EAL: Detected lcore 10 as core 10 on socket 0 00:05:17.178 EAL: Detected lcore 11 as core 11 on socket 0 00:05:17.178 EAL: Detected lcore 12 as core 12 on socket 0 00:05:17.178 EAL: Detected lcore 13 as core 13 on socket 0 00:05:17.178 EAL: Detected lcore 14 as core 14 on socket 0 00:05:17.178 EAL: Detected lcore 15 as core 15 on socket 0 00:05:17.178 EAL: Detected lcore 16 as core 16 on socket 0 00:05:17.178 EAL: Detected lcore 17 as core 17 on socket 0 00:05:17.178 EAL: Detected lcore 18 as core 18 on socket 0 00:05:17.178 EAL: Detected lcore 19 as core 19 on socket 0 00:05:17.178 EAL: Detected lcore 20 as core 20 on socket 0 00:05:17.178 EAL: Detected lcore 21 as core 21 on socket 0 00:05:17.178 EAL: Detected lcore 22 as core 22 on socket 0 00:05:17.178 EAL: Detected lcore 23 as core 23 on socket 0 00:05:17.178 EAL: Detected lcore 24 as core 24 on socket 0 00:05:17.178 EAL: Detected lcore 25 as core 25 on socket 0 00:05:17.178 EAL: Detected lcore 26 as core 26 on socket 0 00:05:17.178 EAL: Detected lcore 27 as core 27 on socket 0 00:05:17.178 EAL: Detected lcore 28 as core 28 on socket 0 00:05:17.178 EAL: Detected lcore 29 as core 29 on socket 0 00:05:17.178 EAL: Detected lcore 30 as core 30 on socket 0 00:05:17.178 EAL: Detected lcore 31 as core 31 on socket 0 00:05:17.178 EAL: Detected lcore 32 as core 32 on socket 0 00:05:17.178 EAL: Detected lcore 33 as core 33 on socket 0 00:05:17.178 EAL: Detected lcore 34 as core 34 on socket 0 00:05:17.178 EAL: Detected lcore 35 as core 35 on socket 0 00:05:17.178 EAL: Detected lcore 36 as core 0 on socket 1 00:05:17.179 EAL: Detected lcore 37 as core 1 on socket 1 00:05:17.179 EAL: Detected lcore 38 as core 2 on socket 1 00:05:17.179 EAL: Detected lcore 39 as core 3 on socket 1 00:05:17.179 EAL: Detected lcore 40 as core 4 on socket 1 00:05:17.179 EAL: Detected lcore 41 as core 5 on socket 1 00:05:17.179 EAL: Detected lcore 42 as core 6 on socket 1 00:05:17.179 EAL: Detected lcore 43 as core 7 on socket 1 00:05:17.179 EAL: Detected lcore 44 as core 8 on socket 1 00:05:17.179 EAL: Detected lcore 45 as core 9 on socket 1 00:05:17.179 EAL: Detected lcore 46 as core 10 on socket 1 00:05:17.179 EAL: Detected lcore 47 as core 11 on socket 1 00:05:17.179 EAL: Detected lcore 48 as core 12 on socket 1 00:05:17.179 EAL: Detected lcore 49 as core 13 on socket 1 00:05:17.179 EAL: Detected lcore 50 as core 14 on socket 1 00:05:17.179 EAL: Detected lcore 51 as core 15 on socket 1 00:05:17.179 EAL: Detected lcore 52 as core 16 on socket 1 00:05:17.179 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:17.179 EAL: Detected lcore 54 as core 18 on socket 1 00:05:17.179 EAL: Detected lcore 55 as core 19 on socket 1 00:05:17.179 EAL: Detected lcore 56 as core 20 on socket 1 00:05:17.179 EAL: Detected lcore 57 as core 21 on socket 1 00:05:17.179 EAL: Detected lcore 58 as core 22 on socket 1 00:05:17.179 EAL: Detected lcore 59 as core 23 on socket 1 00:05:17.179 EAL: Detected lcore 60 as core 24 on socket 1 00:05:17.179 EAL: Detected lcore 61 as core 25 on socket 1 00:05:17.179 EAL: Detected lcore 62 as core 26 on socket 1 00:05:17.179 EAL: Detected lcore 63 as core 27 on socket 1 00:05:17.179 EAL: Detected lcore 64 as core 28 on socket 1 00:05:17.179 EAL: Detected lcore 65 as core 29 on socket 1 00:05:17.179 EAL: Detected lcore 66 as core 30 on socket 1 00:05:17.179 EAL: Detected lcore 67 as core 31 on socket 1 00:05:17.179 EAL: Detected lcore 68 as core 32 on socket 1 00:05:17.179 EAL: Detected lcore 69 as core 33 on socket 1 00:05:17.179 EAL: Detected lcore 70 as core 34 on socket 1 00:05:17.179 EAL: Detected lcore 71 as core 35 on socket 1 00:05:17.179 EAL: Detected lcore 72 as core 0 on socket 0 00:05:17.179 EAL: Detected lcore 73 as core 1 on socket 0 00:05:17.179 EAL: Detected lcore 74 as core 2 on socket 0 00:05:17.179 EAL: Detected lcore 75 as core 3 on socket 0 00:05:17.179 EAL: Detected lcore 76 as core 4 on socket 0 00:05:17.179 EAL: Detected lcore 77 as core 5 on socket 0 00:05:17.179 EAL: Detected lcore 78 as core 6 on socket 0 00:05:17.179 EAL: Detected lcore 79 as core 7 on socket 0 00:05:17.179 EAL: Detected lcore 80 as core 8 on socket 0 00:05:17.179 EAL: Detected lcore 81 as core 9 on socket 0 00:05:17.179 EAL: Detected lcore 82 as core 10 on socket 0 00:05:17.179 EAL: Detected lcore 83 as core 11 on socket 0 00:05:17.179 EAL: Detected lcore 84 as core 12 on socket 0 00:05:17.179 EAL: Detected lcore 85 as core 13 on socket 0 00:05:17.179 EAL: Detected lcore 86 as core 14 on socket 0 00:05:17.179 EAL: Detected lcore 87 as core 15 on socket 0 00:05:17.179 EAL: Detected lcore 88 as core 16 on socket 0 00:05:17.179 EAL: Detected lcore 89 as core 17 on socket 0 00:05:17.179 EAL: Detected lcore 90 as core 18 on socket 0 00:05:17.179 EAL: Detected lcore 91 as core 19 on socket 0 00:05:17.179 EAL: Detected lcore 92 as core 20 on socket 0 00:05:17.179 EAL: Detected lcore 93 as core 21 on socket 0 00:05:17.179 EAL: Detected lcore 94 as core 22 on socket 0 00:05:17.179 EAL: Detected lcore 95 as core 23 on socket 0 00:05:17.179 EAL: Detected lcore 96 as core 24 on socket 0 00:05:17.179 EAL: Detected lcore 97 as core 25 on socket 0 00:05:17.179 EAL: Detected lcore 98 as core 26 on socket 0 00:05:17.179 EAL: Detected lcore 99 as core 27 on socket 0 00:05:17.179 EAL: Detected lcore 100 as core 28 on socket 0 00:05:17.179 EAL: Detected lcore 101 as core 29 on socket 0 00:05:17.179 EAL: Detected lcore 102 as core 30 on socket 0 00:05:17.179 EAL: Detected lcore 103 as core 31 on socket 0 00:05:17.179 EAL: Detected lcore 104 as core 32 on socket 0 00:05:17.179 EAL: Detected lcore 105 as core 33 on socket 0 00:05:17.179 EAL: Detected lcore 106 as core 34 on socket 0 00:05:17.179 EAL: Detected lcore 107 as core 35 on socket 0 00:05:17.179 EAL: Detected lcore 108 as core 0 on socket 1 00:05:17.179 EAL: Detected lcore 109 as core 1 on socket 1 00:05:17.179 EAL: Detected lcore 110 as core 2 on socket 1 00:05:17.179 EAL: Detected lcore 111 as core 3 on socket 1 00:05:17.179 EAL: Detected lcore 112 as core 4 on socket 1 00:05:17.179 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:17.179 EAL: Detected lcore 114 as core 6 on socket 1 00:05:17.179 EAL: Detected lcore 115 as core 7 on socket 1 00:05:17.179 EAL: Detected lcore 116 as core 8 on socket 1 00:05:17.179 EAL: Detected lcore 117 as core 9 on socket 1 00:05:17.179 EAL: Detected lcore 118 as core 10 on socket 1 00:05:17.179 EAL: Detected lcore 119 as core 11 on socket 1 00:05:17.179 EAL: Detected lcore 120 as core 12 on socket 1 00:05:17.179 EAL: Detected lcore 121 as core 13 on socket 1 00:05:17.179 EAL: Detected lcore 122 as core 14 on socket 1 00:05:17.179 EAL: Detected lcore 123 as core 15 on socket 1 00:05:17.179 EAL: Detected lcore 124 as core 16 on socket 1 00:05:17.179 EAL: Detected lcore 125 as core 17 on socket 1 00:05:17.179 EAL: Detected lcore 126 as core 18 on socket 1 00:05:17.179 EAL: Detected lcore 127 as core 19 on socket 1 00:05:17.179 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:17.179 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:17.179 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:17.179 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:17.179 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:17.179 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:17.179 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:17.179 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:17.179 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:17.179 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:17.179 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:17.179 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:17.179 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:17.179 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:17.179 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:17.179 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:17.179 EAL: Maximum logical cores by configuration: 128 00:05:17.179 EAL: Detected CPU lcores: 128 00:05:17.179 EAL: Detected NUMA nodes: 2 00:05:17.179 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:17.179 EAL: Detected shared linkage of DPDK 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:17.179 EAL: Registered [vdev] bus. 
00:05:17.179 EAL: bus.vdev log level changed from disabled to notice 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:17.179 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:17.179 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:17.179 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:17.179 EAL: No shared files mode enabled, IPC will be disabled 00:05:17.179 EAL: No shared files mode enabled, IPC is disabled 00:05:17.179 EAL: Bus pci wants IOVA as 'DC' 00:05:17.179 EAL: Bus vdev wants IOVA as 'DC' 00:05:17.179 EAL: Buses did not request a specific IOVA mode. 00:05:17.179 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:17.179 EAL: Selected IOVA mode 'VA' 00:05:17.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.179 EAL: Probing VFIO support... 00:05:17.179 EAL: IOMMU type 1 (Type 1) is supported 00:05:17.179 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:17.179 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:17.179 EAL: VFIO support initialized 00:05:17.179 EAL: Ask a virtual area of 0x2e000 bytes 00:05:17.179 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:17.179 EAL: Setting up physically contiguous memory... 
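Annotation: the VFIO probe above is what lets the EAL select IOVA as 'VA' here: the kernel exposes a type 1 IOMMU and the NVMe under test was rebound to vfio-pci by setup.sh earlier in this run. A minimal sketch of how that state could be confirmed by hand between runs; the BDF 0000:65:00.0 comes from this log, while the sysfs path and the lspci usage are assumptions on the editor's part, not part of the test output:

  # a non-empty listing means the kernel has an IOMMU active, so VFIO/IOVA-as-VA is possible
  ls /sys/kernel/iommu_groups

  # after setup.sh, the device under test should report vfio-pci as its bound driver
  lspci -k -s 0000:65:00.0 | grep 'Kernel driver in use'
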
00:05:17.179 EAL: Setting maximum number of open files to 524288 00:05:17.179 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:17.179 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:17.179 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:17.179 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.179 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:17.179 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.179 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.179 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:17.179 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:17.179 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.180 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:17.180 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.180 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.180 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:17.180 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:17.180 EAL: Hugepages will be freed exactly as allocated. 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: TSC frequency is ~2400000 KHz 00:05:17.180 EAL: Main lcore 0 is ready (tid=7f87cc249a00;cpuset=[0]) 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 0 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 2MB 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:17.180 EAL: Mem event callback 'spdk:(nil)' registered 00:05:17.180 00:05:17.180 00:05:17.180 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.180 http://cunit.sourceforge.net/ 00:05:17.180 00:05:17.180 00:05:17.180 Suite: components_suite 00:05:17.180 Test: vtophys_malloc_test ...passed 00:05:17.180 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.180 EAL: Trying to obtain current memory policy. 
00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.180 EAL: Trying to obtain current memory policy. 00:05:17.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.180 EAL: Restoring previous memory policy: 4 00:05:17.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.180 EAL: request: mp_malloc_sync 00:05:17.180 EAL: No shared files mode enabled, IPC is disabled 00:05:17.180 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.441 EAL: request: mp_malloc_sync 00:05:17.441 EAL: No shared files mode enabled, IPC is disabled 00:05:17.441 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.441 EAL: Trying to obtain current memory policy. 00:05:17.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.441 EAL: Restoring previous memory policy: 4 00:05:17.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.441 EAL: request: mp_malloc_sync 00:05:17.441 EAL: No shared files mode enabled, IPC is disabled 00:05:17.441 EAL: Heap on socket 0 was expanded by 258MB 00:05:17.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.441 EAL: request: mp_malloc_sync 00:05:17.441 EAL: No shared files mode enabled, IPC is disabled 00:05:17.441 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.441 EAL: Trying to obtain current memory policy. 
00:05:17.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.441 EAL: Restoring previous memory policy: 4 00:05:17.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.441 EAL: request: mp_malloc_sync 00:05:17.441 EAL: No shared files mode enabled, IPC is disabled 00:05:17.441 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.701 EAL: request: mp_malloc_sync 00:05:17.701 EAL: No shared files mode enabled, IPC is disabled 00:05:17.701 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.701 EAL: Trying to obtain current memory policy. 00:05:17.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.701 EAL: Restoring previous memory policy: 4 00:05:17.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.701 EAL: request: mp_malloc_sync 00:05:17.701 EAL: No shared files mode enabled, IPC is disabled 00:05:17.701 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.963 EAL: request: mp_malloc_sync 00:05:17.963 EAL: No shared files mode enabled, IPC is disabled 00:05:17.963 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:17.963 passed 00:05:17.963 00:05:17.963 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.963 suites 1 1 n/a 0 0 00:05:17.963 tests 2 2 2 0 0 00:05:17.963 asserts 497 497 497 0 n/a 00:05:17.963 00:05:17.963 Elapsed time = 0.647 seconds 00:05:17.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.963 EAL: request: mp_malloc_sync 00:05:17.963 EAL: No shared files mode enabled, IPC is disabled 00:05:17.963 EAL: Heap on socket 0 was shrunk by 2MB 00:05:17.963 EAL: No shared files mode enabled, IPC is disabled 00:05:17.963 EAL: No shared files mode enabled, IPC is disabled 00:05:17.963 EAL: No shared files mode enabled, IPC is disabled 00:05:17.963 00:05:17.963 real 0m0.772s 00:05:17.963 user 0m0.404s 00:05:17.963 sys 0m0.338s 00:05:17.963 01:23:44 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.963 01:23:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:17.963 ************************************ 00:05:17.963 END TEST env_vtophys 00:05:17.963 ************************************ 00:05:17.963 01:23:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.963 01:23:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.963 01:23:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.963 01:23:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.963 ************************************ 00:05:17.963 START TEST env_pci 00:05:17.963 ************************************ 00:05:17.963 01:23:44 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.963 00:05:17.963 00:05:17.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.963 http://cunit.sourceforge.net/ 00:05:17.963 00:05:17.963 00:05:17.963 Suite: pci 00:05:17.963 Test: pci_hook ...[2024-07-12 01:23:44.250820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3733758 has claimed it 00:05:17.963 EAL: Cannot find device (10000:00:01.0) 00:05:17.963 EAL: Failed to attach device on primary process 00:05:17.963 passed 00:05:17.963 00:05:17.963 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:17.963 suites 1 1 n/a 0 0 00:05:17.963 tests 1 1 1 0 0 00:05:17.963 asserts 25 25 25 0 n/a 00:05:17.963 00:05:17.963 Elapsed time = 0.040 seconds 00:05:17.963 00:05:17.963 real 0m0.060s 00:05:17.963 user 0m0.020s 00:05:17.963 sys 0m0.040s 00:05:17.963 01:23:44 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.963 01:23:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:17.963 ************************************ 00:05:17.963 END TEST env_pci 00:05:17.963 ************************************ 00:05:18.224 01:23:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.224 01:23:44 env -- env/env.sh@15 -- # uname 00:05:18.224 01:23:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.224 01:23:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.224 01:23:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.224 01:23:44 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:18.224 01:23:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.224 01:23:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.224 ************************************ 00:05:18.224 START TEST env_dpdk_post_init 00:05:18.224 ************************************ 00:05:18.224 01:23:44 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.224 EAL: Detected CPU lcores: 128 00:05:18.224 EAL: Detected NUMA nodes: 2 00:05:18.224 EAL: Detected shared linkage of DPDK 00:05:18.224 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.224 EAL: Selected IOVA mode 'VA' 00:05:18.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.224 EAL: VFIO support initialized 00:05:18.224 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.224 EAL: Using IOMMU type 1 (Type 1) 00:05:18.485 EAL: Ignore mapping IO port bar(1) 00:05:18.485 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:18.485 EAL: Ignore mapping IO port bar(1) 00:05:18.746 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:18.746 EAL: Ignore mapping IO port bar(1) 00:05:19.007 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:19.007 EAL: Ignore mapping IO port bar(1) 00:05:19.268 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:19.268 EAL: Ignore mapping IO port bar(1) 00:05:19.268 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:19.529 EAL: Ignore mapping IO port bar(1) 00:05:19.529 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:19.790 EAL: Ignore mapping IO port bar(1) 00:05:19.790 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:20.051 EAL: Ignore mapping IO port bar(1) 00:05:20.051 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:20.312 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:20.312 EAL: Ignore mapping IO port bar(1) 00:05:20.574 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:20.574 EAL: Ignore mapping IO port bar(1) 00:05:20.836 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:20.836 EAL: Ignore mapping IO port bar(1) 00:05:20.836 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:21.096 EAL: Ignore mapping IO port bar(1) 00:05:21.096 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:21.357 EAL: Ignore mapping IO port bar(1) 00:05:21.357 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:21.618 EAL: Ignore mapping IO port bar(1) 00:05:21.619 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:21.619 EAL: Ignore mapping IO port bar(1) 00:05:21.880 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:21.880 EAL: Ignore mapping IO port bar(1) 00:05:22.141 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:22.141 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:22.141 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:22.141 Starting DPDK initialization... 00:05:22.141 Starting SPDK post initialization... 00:05:22.141 SPDK NVMe probe 00:05:22.141 Attaching to 0000:65:00.0 00:05:22.141 Attached to 0000:65:00.0 00:05:22.141 Cleaning up... 00:05:24.054 00:05:24.054 real 0m5.715s 00:05:24.054 user 0m0.178s 00:05:24.054 sys 0m0.083s 00:05:24.054 01:23:50 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.054 01:23:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.054 ************************************ 00:05:24.054 END TEST env_dpdk_post_init 00:05:24.054 ************************************ 00:05:24.054 01:23:50 env -- env/env.sh@26 -- # uname 00:05:24.054 01:23:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.055 01:23:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.055 01:23:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.055 01:23:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.055 01:23:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 ************************************ 00:05:24.055 START TEST env_mem_callbacks 00:05:24.055 ************************************ 00:05:24.055 01:23:50 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.055 EAL: Detected CPU lcores: 128 00:05:24.055 EAL: Detected NUMA nodes: 2 00:05:24.055 EAL: Detected shared linkage of DPDK 00:05:24.055 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.055 EAL: Selected IOVA mode 'VA' 00:05:24.055 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.055 EAL: VFIO support initialized 00:05:24.055 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.055 00:05:24.055 00:05:24.055 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.055 http://cunit.sourceforge.net/ 00:05:24.055 00:05:24.055 00:05:24.055 Suite: memory 00:05:24.055 Test: test ... 
00:05:24.055 register 0x200000200000 2097152 00:05:24.055 malloc 3145728 00:05:24.055 register 0x200000400000 4194304 00:05:24.055 buf 0x200000500000 len 3145728 PASSED 00:05:24.055 malloc 64 00:05:24.055 buf 0x2000004fff40 len 64 PASSED 00:05:24.055 malloc 4194304 00:05:24.055 register 0x200000800000 6291456 00:05:24.055 buf 0x200000a00000 len 4194304 PASSED 00:05:24.055 free 0x200000500000 3145728 00:05:24.055 free 0x2000004fff40 64 00:05:24.055 unregister 0x200000400000 4194304 PASSED 00:05:24.055 free 0x200000a00000 4194304 00:05:24.055 unregister 0x200000800000 6291456 PASSED 00:05:24.055 malloc 8388608 00:05:24.055 register 0x200000400000 10485760 00:05:24.055 buf 0x200000600000 len 8388608 PASSED 00:05:24.055 free 0x200000600000 8388608 00:05:24.055 unregister 0x200000400000 10485760 PASSED 00:05:24.055 passed 00:05:24.055 00:05:24.055 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.055 suites 1 1 n/a 0 0 00:05:24.055 tests 1 1 1 0 0 00:05:24.055 asserts 15 15 15 0 n/a 00:05:24.055 00:05:24.055 Elapsed time = 0.007 seconds 00:05:24.055 00:05:24.055 real 0m0.066s 00:05:24.055 user 0m0.021s 00:05:24.055 sys 0m0.045s 00:05:24.055 01:23:50 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.055 01:23:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 ************************************ 00:05:24.055 END TEST env_mem_callbacks 00:05:24.055 ************************************ 00:05:24.055 00:05:24.055 real 0m7.298s 00:05:24.055 user 0m0.977s 00:05:24.055 sys 0m0.861s 00:05:24.055 01:23:50 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.055 01:23:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 ************************************ 00:05:24.055 END TEST env 00:05:24.055 ************************************ 00:05:24.055 01:23:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.055 01:23:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.055 01:23:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.055 01:23:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.055 ************************************ 00:05:24.055 START TEST rpc 00:05:24.055 ************************************ 00:05:24.055 01:23:50 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:24.315 * Looking for test storage... 00:05:24.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.315 01:23:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3735204 00:05:24.315 01:23:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.315 01:23:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:24.315 01:23:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3735204 00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@827 -- # '[' -z 3735204 ']' 00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
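Annotation: the rpc tests that follow exercise this spdk_tgt instance through rpc_cmd, which in this harness amounts to JSON-RPC calls over the /var/tmp/spdk.sock socket named above. A minimal sketch of the same bdev round trip issued by hand with scripts/rpc.py; the method names, the 8 512 arguments, the Malloc0/Passthru0 names, and the expected count of 2 all come from the trace below, while treating rpc_cmd as interchangeable with rpc.py and passing the socket explicitly with -s are assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512                      # creates Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                    # 2 bdevs expected
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0
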
00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.315 01:23:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.315 [2024-07-12 01:23:50.516319] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:24.315 [2024-07-12 01:23:50.516389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735204 ] 00:05:24.315 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.315 [2024-07-12 01:23:50.590555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.315 [2024-07-12 01:23:50.628529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.315 [2024-07-12 01:23:50.628576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3735204' to capture a snapshot of events at runtime. 00:05:24.315 [2024-07-12 01:23:50.628583] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.315 [2024-07-12 01:23:50.628590] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.315 [2024-07-12 01:23:50.628595] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3735204 for offline analysis/debug. 00:05:24.316 [2024-07-12 01:23:50.628617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.257 01:23:51 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.257 01:23:51 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:25.257 01:23:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.257 01:23:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.257 01:23:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.257 01:23:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.257 01:23:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.257 01:23:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.257 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.257 ************************************ 00:05:25.257 START TEST rpc_integrity 00:05:25.257 ************************************ 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.257 01:23:51 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.257 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.257 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.257 { 00:05:25.257 "name": "Malloc0", 00:05:25.257 "aliases": [ 00:05:25.257 "6e595881-8c98-4d24-ac38-cf62b7b5959a" 00:05:25.257 ], 00:05:25.257 "product_name": "Malloc disk", 00:05:25.257 "block_size": 512, 00:05:25.257 "num_blocks": 16384, 00:05:25.257 "uuid": "6e595881-8c98-4d24-ac38-cf62b7b5959a", 00:05:25.257 "assigned_rate_limits": { 00:05:25.257 "rw_ios_per_sec": 0, 00:05:25.257 "rw_mbytes_per_sec": 0, 00:05:25.257 "r_mbytes_per_sec": 0, 00:05:25.257 "w_mbytes_per_sec": 0 00:05:25.257 }, 00:05:25.257 "claimed": false, 00:05:25.257 "zoned": false, 00:05:25.257 "supported_io_types": { 00:05:25.257 "read": true, 00:05:25.257 "write": true, 00:05:25.257 "unmap": true, 00:05:25.257 "write_zeroes": true, 00:05:25.257 "flush": true, 00:05:25.257 "reset": true, 00:05:25.257 "compare": false, 00:05:25.257 "compare_and_write": false, 00:05:25.257 "abort": true, 00:05:25.257 "nvme_admin": false, 00:05:25.258 "nvme_io": false 00:05:25.258 }, 00:05:25.258 "memory_domains": [ 00:05:25.258 { 00:05:25.258 "dma_device_id": "system", 00:05:25.258 "dma_device_type": 1 00:05:25.258 }, 00:05:25.258 { 00:05:25.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.258 "dma_device_type": 2 00:05:25.258 } 00:05:25.258 ], 00:05:25.258 "driver_specific": {} 00:05:25.258 } 00:05:25.258 ]' 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 [2024-07-12 01:23:51.471673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.258 [2024-07-12 01:23:51.471707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.258 [2024-07-12 01:23:51.471719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1abb490 00:05:25.258 [2024-07-12 01:23:51.471725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.258 [2024-07-12 01:23:51.473061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.258 [2024-07-12 01:23:51.473082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.258 Passthru0 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.258 { 00:05:25.258 "name": "Malloc0", 00:05:25.258 "aliases": [ 00:05:25.258 "6e595881-8c98-4d24-ac38-cf62b7b5959a" 00:05:25.258 ], 00:05:25.258 "product_name": "Malloc disk", 00:05:25.258 "block_size": 512, 00:05:25.258 "num_blocks": 16384, 00:05:25.258 "uuid": "6e595881-8c98-4d24-ac38-cf62b7b5959a", 00:05:25.258 "assigned_rate_limits": { 00:05:25.258 "rw_ios_per_sec": 0, 00:05:25.258 "rw_mbytes_per_sec": 0, 00:05:25.258 "r_mbytes_per_sec": 0, 00:05:25.258 "w_mbytes_per_sec": 0 00:05:25.258 }, 00:05:25.258 "claimed": true, 00:05:25.258 "claim_type": "exclusive_write", 00:05:25.258 "zoned": false, 00:05:25.258 "supported_io_types": { 00:05:25.258 "read": true, 00:05:25.258 "write": true, 00:05:25.258 "unmap": true, 00:05:25.258 "write_zeroes": true, 00:05:25.258 "flush": true, 00:05:25.258 "reset": true, 00:05:25.258 "compare": false, 00:05:25.258 "compare_and_write": false, 00:05:25.258 "abort": true, 00:05:25.258 "nvme_admin": false, 00:05:25.258 "nvme_io": false 00:05:25.258 }, 00:05:25.258 "memory_domains": [ 00:05:25.258 { 00:05:25.258 "dma_device_id": "system", 00:05:25.258 "dma_device_type": 1 00:05:25.258 }, 00:05:25.258 { 00:05:25.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.258 "dma_device_type": 2 00:05:25.258 } 00:05:25.258 ], 00:05:25.258 "driver_specific": {} 00:05:25.258 }, 00:05:25.258 { 00:05:25.258 "name": "Passthru0", 00:05:25.258 "aliases": [ 00:05:25.258 "7b2158d8-f1d5-5348-b705-3803b8de4cf5" 00:05:25.258 ], 00:05:25.258 "product_name": "passthru", 00:05:25.258 "block_size": 512, 00:05:25.258 "num_blocks": 16384, 00:05:25.258 "uuid": "7b2158d8-f1d5-5348-b705-3803b8de4cf5", 00:05:25.258 "assigned_rate_limits": { 00:05:25.258 "rw_ios_per_sec": 0, 00:05:25.258 "rw_mbytes_per_sec": 0, 00:05:25.258 "r_mbytes_per_sec": 0, 00:05:25.258 "w_mbytes_per_sec": 0 00:05:25.258 }, 00:05:25.258 "claimed": false, 00:05:25.258 "zoned": false, 00:05:25.258 "supported_io_types": { 00:05:25.258 "read": true, 00:05:25.258 "write": true, 00:05:25.258 "unmap": true, 00:05:25.258 "write_zeroes": true, 00:05:25.258 "flush": true, 00:05:25.258 "reset": true, 00:05:25.258 "compare": false, 00:05:25.258 "compare_and_write": false, 00:05:25.258 "abort": true, 00:05:25.258 "nvme_admin": false, 00:05:25.258 "nvme_io": false 00:05:25.258 }, 00:05:25.258 "memory_domains": [ 00:05:25.258 { 00:05:25.258 "dma_device_id": "system", 00:05:25.258 "dma_device_type": 1 00:05:25.258 }, 00:05:25.258 { 00:05:25.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.258 "dma_device_type": 2 00:05:25.258 } 00:05:25.258 ], 00:05:25.258 "driver_specific": { 00:05:25.258 "passthru": { 00:05:25.258 "name": "Passthru0", 00:05:25.258 "base_bdev_name": "Malloc0" 00:05:25.258 } 00:05:25.258 } 00:05:25.258 } 00:05:25.258 ]' 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 
01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.258 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.519 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.519 00:05:25.519 real 0m0.295s 00:05:25.519 user 0m0.181s 00:05:25.519 sys 0m0.045s 00:05:25.519 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 ************************************ 00:05:25.519 END TEST rpc_integrity 00:05:25.519 ************************************ 00:05:25.519 01:23:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.519 01:23:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.519 01:23:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.519 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 ************************************ 00:05:25.519 START TEST rpc_plugins 00:05:25.519 ************************************ 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.519 { 00:05:25.519 "name": "Malloc1", 00:05:25.519 "aliases": [ 00:05:25.519 "1eade0ba-46c7-4a60-b62e-ed4d5e1dafd9" 00:05:25.519 ], 00:05:25.519 "product_name": "Malloc disk", 00:05:25.519 "block_size": 4096, 00:05:25.519 "num_blocks": 256, 00:05:25.519 "uuid": "1eade0ba-46c7-4a60-b62e-ed4d5e1dafd9", 00:05:25.519 "assigned_rate_limits": { 00:05:25.519 "rw_ios_per_sec": 0, 00:05:25.519 "rw_mbytes_per_sec": 0, 00:05:25.519 "r_mbytes_per_sec": 0, 00:05:25.519 "w_mbytes_per_sec": 0 00:05:25.519 }, 00:05:25.519 "claimed": false, 00:05:25.519 "zoned": false, 00:05:25.519 "supported_io_types": { 00:05:25.519 "read": true, 00:05:25.519 "write": true, 00:05:25.519 "unmap": true, 00:05:25.519 "write_zeroes": true, 00:05:25.519 
"flush": true, 00:05:25.519 "reset": true, 00:05:25.519 "compare": false, 00:05:25.519 "compare_and_write": false, 00:05:25.519 "abort": true, 00:05:25.519 "nvme_admin": false, 00:05:25.519 "nvme_io": false 00:05:25.519 }, 00:05:25.519 "memory_domains": [ 00:05:25.519 { 00:05:25.519 "dma_device_id": "system", 00:05:25.519 "dma_device_type": 1 00:05:25.519 }, 00:05:25.519 { 00:05:25.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.519 "dma_device_type": 2 00:05:25.519 } 00:05:25.519 ], 00:05:25.519 "driver_specific": {} 00:05:25.519 } 00:05:25.519 ]' 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:25.519 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.519 00:05:25.519 real 0m0.151s 00:05:25.519 user 0m0.096s 00:05:25.519 sys 0m0.018s 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.519 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.519 ************************************ 00:05:25.519 END TEST rpc_plugins 00:05:25.519 ************************************ 00:05:25.780 01:23:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.780 01:23:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.780 01:23:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.780 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.780 ************************************ 00:05:25.780 START TEST rpc_trace_cmd_test 00:05:25.780 ************************************ 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:25.780 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3735204", 00:05:25.780 "tpoint_group_mask": "0x8", 00:05:25.780 "iscsi_conn": { 00:05:25.780 "mask": "0x2", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "scsi": { 00:05:25.780 "mask": "0x4", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "bdev": { 00:05:25.780 "mask": "0x8", 00:05:25.780 "tpoint_mask": 
"0xffffffffffffffff" 00:05:25.780 }, 00:05:25.780 "nvmf_rdma": { 00:05:25.780 "mask": "0x10", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "nvmf_tcp": { 00:05:25.780 "mask": "0x20", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "ftl": { 00:05:25.780 "mask": "0x40", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "blobfs": { 00:05:25.780 "mask": "0x80", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "dsa": { 00:05:25.780 "mask": "0x200", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "thread": { 00:05:25.780 "mask": "0x400", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "nvme_pcie": { 00:05:25.780 "mask": "0x800", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "iaa": { 00:05:25.780 "mask": "0x1000", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "nvme_tcp": { 00:05:25.780 "mask": "0x2000", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "bdev_nvme": { 00:05:25.780 "mask": "0x4000", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 }, 00:05:25.780 "sock": { 00:05:25.780 "mask": "0x8000", 00:05:25.780 "tpoint_mask": "0x0" 00:05:25.780 } 00:05:25.780 }' 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:25.780 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.780 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.041 01:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.041 00:05:26.041 real 0m0.228s 00:05:26.041 user 0m0.195s 00:05:26.041 sys 0m0.026s 00:05:26.041 01:23:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 ************************************ 00:05:26.041 END TEST rpc_trace_cmd_test 00:05:26.041 ************************************ 00:05:26.041 01:23:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.041 01:23:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.041 01:23:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.041 01:23:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.041 01:23:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.041 01:23:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 ************************************ 00:05:26.041 START TEST rpc_daemon_integrity 00:05:26.041 ************************************ 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.041 { 00:05:26.041 "name": "Malloc2", 00:05:26.041 "aliases": [ 00:05:26.041 "0cc52608-1318-44ef-9ee4-f964c176048d" 00:05:26.041 ], 00:05:26.041 "product_name": "Malloc disk", 00:05:26.041 "block_size": 512, 00:05:26.041 "num_blocks": 16384, 00:05:26.041 "uuid": "0cc52608-1318-44ef-9ee4-f964c176048d", 00:05:26.041 "assigned_rate_limits": { 00:05:26.041 "rw_ios_per_sec": 0, 00:05:26.041 "rw_mbytes_per_sec": 0, 00:05:26.041 "r_mbytes_per_sec": 0, 00:05:26.041 "w_mbytes_per_sec": 0 00:05:26.041 }, 00:05:26.041 "claimed": false, 00:05:26.041 "zoned": false, 00:05:26.041 "supported_io_types": { 00:05:26.041 "read": true, 00:05:26.041 "write": true, 00:05:26.041 "unmap": true, 00:05:26.041 "write_zeroes": true, 00:05:26.041 "flush": true, 00:05:26.041 "reset": true, 00:05:26.041 "compare": false, 00:05:26.041 "compare_and_write": false, 00:05:26.041 "abort": true, 00:05:26.041 "nvme_admin": false, 00:05:26.041 "nvme_io": false 00:05:26.041 }, 00:05:26.041 "memory_domains": [ 00:05:26.041 { 00:05:26.041 "dma_device_id": "system", 00:05:26.041 "dma_device_type": 1 00:05:26.041 }, 00:05:26.041 { 00:05:26.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.041 "dma_device_type": 2 00:05:26.041 } 00:05:26.041 ], 00:05:26.041 "driver_specific": {} 00:05:26.041 } 00:05:26.041 ]' 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 [2024-07-12 01:23:52.362067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.041 [2024-07-12 01:23:52.362097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.041 [2024-07-12 01:23:52.362112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1abc9c0 00:05:26.041 [2024-07-12 01:23:52.362123] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.041 [2024-07-12 01:23:52.363325] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.041 [2024-07-12 01:23:52.363345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.041 Passthru0 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.041 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.041 { 00:05:26.041 "name": "Malloc2", 00:05:26.041 "aliases": [ 00:05:26.041 "0cc52608-1318-44ef-9ee4-f964c176048d" 00:05:26.041 ], 00:05:26.041 "product_name": "Malloc disk", 00:05:26.041 "block_size": 512, 00:05:26.041 "num_blocks": 16384, 00:05:26.041 "uuid": "0cc52608-1318-44ef-9ee4-f964c176048d", 00:05:26.041 "assigned_rate_limits": { 00:05:26.041 "rw_ios_per_sec": 0, 00:05:26.041 "rw_mbytes_per_sec": 0, 00:05:26.041 "r_mbytes_per_sec": 0, 00:05:26.041 "w_mbytes_per_sec": 0 00:05:26.041 }, 00:05:26.041 "claimed": true, 00:05:26.041 "claim_type": "exclusive_write", 00:05:26.041 "zoned": false, 00:05:26.041 "supported_io_types": { 00:05:26.041 "read": true, 00:05:26.041 "write": true, 00:05:26.041 "unmap": true, 00:05:26.041 "write_zeroes": true, 00:05:26.041 "flush": true, 00:05:26.041 "reset": true, 00:05:26.041 "compare": false, 00:05:26.041 "compare_and_write": false, 00:05:26.041 "abort": true, 00:05:26.041 "nvme_admin": false, 00:05:26.041 "nvme_io": false 00:05:26.041 }, 00:05:26.041 "memory_domains": [ 00:05:26.041 { 00:05:26.041 "dma_device_id": "system", 00:05:26.041 "dma_device_type": 1 00:05:26.041 }, 00:05:26.041 { 00:05:26.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.041 "dma_device_type": 2 00:05:26.041 } 00:05:26.041 ], 00:05:26.041 "driver_specific": {} 00:05:26.041 }, 00:05:26.041 { 00:05:26.041 "name": "Passthru0", 00:05:26.041 "aliases": [ 00:05:26.041 "2b202d88-2505-5318-9624-8c836cf717f6" 00:05:26.041 ], 00:05:26.041 "product_name": "passthru", 00:05:26.041 "block_size": 512, 00:05:26.041 "num_blocks": 16384, 00:05:26.041 "uuid": "2b202d88-2505-5318-9624-8c836cf717f6", 00:05:26.041 "assigned_rate_limits": { 00:05:26.041 "rw_ios_per_sec": 0, 00:05:26.041 "rw_mbytes_per_sec": 0, 00:05:26.041 "r_mbytes_per_sec": 0, 00:05:26.041 "w_mbytes_per_sec": 0 00:05:26.041 }, 00:05:26.041 "claimed": false, 00:05:26.041 "zoned": false, 00:05:26.041 "supported_io_types": { 00:05:26.041 "read": true, 00:05:26.041 "write": true, 00:05:26.041 "unmap": true, 00:05:26.041 "write_zeroes": true, 00:05:26.041 "flush": true, 00:05:26.041 "reset": true, 00:05:26.041 "compare": false, 00:05:26.041 "compare_and_write": false, 00:05:26.041 "abort": true, 00:05:26.041 "nvme_admin": false, 00:05:26.041 "nvme_io": false 00:05:26.041 }, 00:05:26.041 "memory_domains": [ 00:05:26.041 { 00:05:26.041 "dma_device_id": "system", 00:05:26.041 "dma_device_type": 1 00:05:26.041 }, 00:05:26.041 { 00:05:26.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.041 "dma_device_type": 2 00:05:26.041 } 00:05:26.041 ], 00:05:26.041 "driver_specific": { 00:05:26.041 "passthru": { 00:05:26.041 "name": "Passthru0", 00:05:26.041 "base_bdev_name": "Malloc2" 00:05:26.041 } 00:05:26.041 } 00:05:26.041 } 00:05:26.041 ]' 00:05:26.041 01:23:52 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.303 00:05:26.303 real 0m0.289s 00:05:26.303 user 0m0.179s 00:05:26.303 sys 0m0.041s 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.303 01:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.303 ************************************ 00:05:26.303 END TEST rpc_daemon_integrity 00:05:26.303 ************************************ 00:05:26.303 01:23:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.303 01:23:52 rpc -- rpc/rpc.sh@84 -- # killprocess 3735204 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@946 -- # '[' -z 3735204 ']' 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@950 -- # kill -0 3735204 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@951 -- # uname 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3735204 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3735204' 00:05:26.303 killing process with pid 3735204 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@965 -- # kill 3735204 00:05:26.303 01:23:52 rpc -- common/autotest_common.sh@970 -- # wait 3735204 00:05:26.563 00:05:26.563 real 0m2.431s 00:05:26.563 user 0m3.199s 00:05:26.563 sys 0m0.675s 00:05:26.563 01:23:52 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.563 01:23:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.563 ************************************ 00:05:26.563 END TEST rpc 00:05:26.563 ************************************ 00:05:26.563 01:23:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.563 01:23:52 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.563 01:23:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.563 01:23:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.563 ************************************ 00:05:26.563 START TEST skip_rpc 00:05:26.563 ************************************ 00:05:26.563 01:23:52 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.823 * Looking for test storage... 00:05:26.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.823 01:23:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.823 01:23:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.823 01:23:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.823 01:23:52 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.823 01:23:52 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.823 01:23:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.823 ************************************ 00:05:26.823 START TEST skip_rpc 00:05:26.823 ************************************ 00:05:26.823 01:23:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:26.823 01:23:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3735732 00:05:26.823 01:23:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.823 01:23:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.823 01:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.823 [2024-07-12 01:23:53.055601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:26.823 [2024-07-12 01:23:53.055667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735732 ] 00:05:26.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.823 [2024-07-12 01:23:53.132184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.823 [2024-07-12 01:23:53.170534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3735732 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3735732 ']' 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3735732 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3735732 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3735732' 00:05:32.104 killing process with pid 3735732 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3735732 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3735732 00:05:32.104 00:05:32.104 real 0m5.264s 00:05:32.104 user 0m5.042s 00:05:32.104 sys 0m0.254s 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.104 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.104 ************************************ 00:05:32.104 END TEST skip_rpc 
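A minimal sketch of the flow the skip_rpc case above exercises: start the target with --no-rpc-server so no RPC socket is ever created, then confirm that an RPC call fails. Paths are relative to the SPDK checkout and the PID handling is simplified; the suite itself uses its rpc_cmd, NOT and killprocess helpers.

    # Start the target without an RPC server listening.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!

    # Any RPC must now fail, since /var/tmp/spdk.sock is never created.
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi

    kill $tgt_pid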
00:05:32.104 ************************************ 00:05:32.104 01:23:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.104 01:23:58 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.104 01:23:58 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.104 01:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.104 ************************************ 00:05:32.104 START TEST skip_rpc_with_json 00:05:32.104 ************************************ 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3736906 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3736906 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3736906 ']' 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.104 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.104 [2024-07-12 01:23:58.395085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
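The skip_rpc_with_json case that starts here launches a normal target and then waits for the default UNIX-domain RPC socket before issuing commands. A hedged sketch of such a readiness wait (the suite uses its own waitforlisten helper; the polling loop below is illustrative only):

    ./build/bin/spdk_tgt -m 0x1 &

    # Poll until the default RPC socket answers; spdk_get_version is a cheap probe.
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done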
00:05:32.104 [2024-07-12 01:23:58.395141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736906 ] 00:05:32.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.364 [2024-07-12 01:23:58.464622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.364 [2024-07-12 01:23:58.501926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.934 [2024-07-12 01:23:59.156632] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.934 request: 00:05:32.934 { 00:05:32.934 "trtype": "tcp", 00:05:32.934 "method": "nvmf_get_transports", 00:05:32.934 "req_id": 1 00:05:32.934 } 00:05:32.934 Got JSON-RPC error response 00:05:32.934 response: 00:05:32.934 { 00:05:32.934 "code": -19, 00:05:32.934 "message": "No such device" 00:05:32.934 } 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.934 [2024-07-12 01:23:59.168745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.934 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.194 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.194 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.194 { 00:05:33.194 "subsystems": [ 00:05:33.194 { 00:05:33.194 "subsystem": "vfio_user_target", 00:05:33.194 "config": null 00:05:33.194 }, 00:05:33.194 { 00:05:33.194 "subsystem": "keyring", 00:05:33.194 "config": [] 00:05:33.194 }, 00:05:33.194 { 00:05:33.194 "subsystem": "iobuf", 00:05:33.194 "config": [ 00:05:33.194 { 00:05:33.194 "method": "iobuf_set_options", 00:05:33.194 "params": { 00:05:33.194 "small_pool_count": 8192, 00:05:33.194 "large_pool_count": 1024, 00:05:33.194 "small_bufsize": 8192, 00:05:33.194 "large_bufsize": 135168 00:05:33.194 } 00:05:33.194 } 00:05:33.194 ] 00:05:33.194 }, 00:05:33.194 { 00:05:33.194 "subsystem": "sock", 00:05:33.194 "config": [ 00:05:33.194 { 00:05:33.194 "method": "sock_set_default_impl", 00:05:33.194 "params": { 00:05:33.195 "impl_name": "posix" 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": 
"sock_impl_set_options", 00:05:33.195 "params": { 00:05:33.195 "impl_name": "ssl", 00:05:33.195 "recv_buf_size": 4096, 00:05:33.195 "send_buf_size": 4096, 00:05:33.195 "enable_recv_pipe": true, 00:05:33.195 "enable_quickack": false, 00:05:33.195 "enable_placement_id": 0, 00:05:33.195 "enable_zerocopy_send_server": true, 00:05:33.195 "enable_zerocopy_send_client": false, 00:05:33.195 "zerocopy_threshold": 0, 00:05:33.195 "tls_version": 0, 00:05:33.195 "enable_ktls": false 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "sock_impl_set_options", 00:05:33.195 "params": { 00:05:33.195 "impl_name": "posix", 00:05:33.195 "recv_buf_size": 2097152, 00:05:33.195 "send_buf_size": 2097152, 00:05:33.195 "enable_recv_pipe": true, 00:05:33.195 "enable_quickack": false, 00:05:33.195 "enable_placement_id": 0, 00:05:33.195 "enable_zerocopy_send_server": true, 00:05:33.195 "enable_zerocopy_send_client": false, 00:05:33.195 "zerocopy_threshold": 0, 00:05:33.195 "tls_version": 0, 00:05:33.195 "enable_ktls": false 00:05:33.195 } 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "vmd", 00:05:33.195 "config": [] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "accel", 00:05:33.195 "config": [ 00:05:33.195 { 00:05:33.195 "method": "accel_set_options", 00:05:33.195 "params": { 00:05:33.195 "small_cache_size": 128, 00:05:33.195 "large_cache_size": 16, 00:05:33.195 "task_count": 2048, 00:05:33.195 "sequence_count": 2048, 00:05:33.195 "buf_count": 2048 00:05:33.195 } 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "bdev", 00:05:33.195 "config": [ 00:05:33.195 { 00:05:33.195 "method": "bdev_set_options", 00:05:33.195 "params": { 00:05:33.195 "bdev_io_pool_size": 65535, 00:05:33.195 "bdev_io_cache_size": 256, 00:05:33.195 "bdev_auto_examine": true, 00:05:33.195 "iobuf_small_cache_size": 128, 00:05:33.195 "iobuf_large_cache_size": 16 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "bdev_raid_set_options", 00:05:33.195 "params": { 00:05:33.195 "process_window_size_kb": 1024 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "bdev_iscsi_set_options", 00:05:33.195 "params": { 00:05:33.195 "timeout_sec": 30 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "bdev_nvme_set_options", 00:05:33.195 "params": { 00:05:33.195 "action_on_timeout": "none", 00:05:33.195 "timeout_us": 0, 00:05:33.195 "timeout_admin_us": 0, 00:05:33.195 "keep_alive_timeout_ms": 10000, 00:05:33.195 "arbitration_burst": 0, 00:05:33.195 "low_priority_weight": 0, 00:05:33.195 "medium_priority_weight": 0, 00:05:33.195 "high_priority_weight": 0, 00:05:33.195 "nvme_adminq_poll_period_us": 10000, 00:05:33.195 "nvme_ioq_poll_period_us": 0, 00:05:33.195 "io_queue_requests": 0, 00:05:33.195 "delay_cmd_submit": true, 00:05:33.195 "transport_retry_count": 4, 00:05:33.195 "bdev_retry_count": 3, 00:05:33.195 "transport_ack_timeout": 0, 00:05:33.195 "ctrlr_loss_timeout_sec": 0, 00:05:33.195 "reconnect_delay_sec": 0, 00:05:33.195 "fast_io_fail_timeout_sec": 0, 00:05:33.195 "disable_auto_failback": false, 00:05:33.195 "generate_uuids": false, 00:05:33.195 "transport_tos": 0, 00:05:33.195 "nvme_error_stat": false, 00:05:33.195 "rdma_srq_size": 0, 00:05:33.195 "io_path_stat": false, 00:05:33.195 "allow_accel_sequence": false, 00:05:33.195 "rdma_max_cq_size": 0, 00:05:33.195 "rdma_cm_event_timeout_ms": 0, 00:05:33.195 "dhchap_digests": [ 00:05:33.195 "sha256", 00:05:33.195 "sha384", 00:05:33.195 "sha512" 
00:05:33.195 ], 00:05:33.195 "dhchap_dhgroups": [ 00:05:33.195 "null", 00:05:33.195 "ffdhe2048", 00:05:33.195 "ffdhe3072", 00:05:33.195 "ffdhe4096", 00:05:33.195 "ffdhe6144", 00:05:33.195 "ffdhe8192" 00:05:33.195 ] 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "bdev_nvme_set_hotplug", 00:05:33.195 "params": { 00:05:33.195 "period_us": 100000, 00:05:33.195 "enable": false 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "bdev_wait_for_examine" 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "scsi", 00:05:33.195 "config": null 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "scheduler", 00:05:33.195 "config": [ 00:05:33.195 { 00:05:33.195 "method": "framework_set_scheduler", 00:05:33.195 "params": { 00:05:33.195 "name": "static" 00:05:33.195 } 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "vhost_scsi", 00:05:33.195 "config": [] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "vhost_blk", 00:05:33.195 "config": [] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "ublk", 00:05:33.195 "config": [] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "nbd", 00:05:33.195 "config": [] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "nvmf", 00:05:33.195 "config": [ 00:05:33.195 { 00:05:33.195 "method": "nvmf_set_config", 00:05:33.195 "params": { 00:05:33.195 "discovery_filter": "match_any", 00:05:33.195 "admin_cmd_passthru": { 00:05:33.195 "identify_ctrlr": false 00:05:33.195 } 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "nvmf_set_max_subsystems", 00:05:33.195 "params": { 00:05:33.195 "max_subsystems": 1024 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "nvmf_set_crdt", 00:05:33.195 "params": { 00:05:33.195 "crdt1": 0, 00:05:33.195 "crdt2": 0, 00:05:33.195 "crdt3": 0 00:05:33.195 } 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "method": "nvmf_create_transport", 00:05:33.195 "params": { 00:05:33.195 "trtype": "TCP", 00:05:33.195 "max_queue_depth": 128, 00:05:33.195 "max_io_qpairs_per_ctrlr": 127, 00:05:33.195 "in_capsule_data_size": 4096, 00:05:33.195 "max_io_size": 131072, 00:05:33.195 "io_unit_size": 131072, 00:05:33.195 "max_aq_depth": 128, 00:05:33.195 "num_shared_buffers": 511, 00:05:33.195 "buf_cache_size": 4294967295, 00:05:33.195 "dif_insert_or_strip": false, 00:05:33.195 "zcopy": false, 00:05:33.195 "c2h_success": true, 00:05:33.195 "sock_priority": 0, 00:05:33.195 "abort_timeout_sec": 1, 00:05:33.195 "ack_timeout": 0, 00:05:33.195 "data_wr_pool_size": 0 00:05:33.195 } 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 }, 00:05:33.195 { 00:05:33.195 "subsystem": "iscsi", 00:05:33.195 "config": [ 00:05:33.195 { 00:05:33.195 "method": "iscsi_set_options", 00:05:33.195 "params": { 00:05:33.195 "node_base": "iqn.2016-06.io.spdk", 00:05:33.195 "max_sessions": 128, 00:05:33.195 "max_connections_per_session": 2, 00:05:33.195 "max_queue_depth": 64, 00:05:33.195 "default_time2wait": 2, 00:05:33.195 "default_time2retain": 20, 00:05:33.195 "first_burst_length": 8192, 00:05:33.195 "immediate_data": true, 00:05:33.195 "allow_duplicated_isid": false, 00:05:33.195 "error_recovery_level": 0, 00:05:33.195 "nop_timeout": 60, 00:05:33.195 "nop_in_interval": 30, 00:05:33.195 "disable_chap": false, 00:05:33.195 "require_chap": false, 00:05:33.195 "mutual_chap": false, 00:05:33.195 "chap_group": 0, 00:05:33.195 "max_large_datain_per_connection": 64, 00:05:33.195 "max_r2t_per_connection": 4, 00:05:33.195 
"pdu_pool_size": 36864, 00:05:33.195 "immediate_data_pool_size": 16384, 00:05:33.195 "data_out_pool_size": 2048 00:05:33.195 } 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 } 00:05:33.195 ] 00:05:33.195 } 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3736906 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3736906 ']' 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3736906 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3736906 00:05:33.195 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.196 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.196 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3736906' 00:05:33.196 killing process with pid 3736906 00:05:33.196 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3736906 00:05:33.196 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3736906 00:05:33.455 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3737104 00:05:33.455 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.455 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3737104 ']' 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3737104' 00:05:38.752 killing process with pid 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3737104 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.752 00:05:38.752 real 
0m6.511s 00:05:38.752 user 0m6.372s 00:05:38.752 sys 0m0.541s 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.752 ************************************ 00:05:38.752 END TEST skip_rpc_with_json 00:05:38.752 ************************************ 00:05:38.752 01:24:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:38.752 01:24:04 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.752 01:24:04 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.752 01:24:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.752 ************************************ 00:05:38.752 START TEST skip_rpc_with_delay 00:05:38.752 ************************************ 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.752 [2024-07-12 01:24:04.984784] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
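The skip_rpc_with_json run above boils down to a configuration round trip: create the TCP transport, dump the live configuration with save_config, restart the target from that file, and check the log for the transport init message. A sketch with the long workspace paths shortened for readability:

    # With the target running:
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # ...stop the target, then replay the saved state on a fresh instance:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    # The transport is recreated during config load, which the log confirms:
    grep -q 'TCP Transport Init' test/rpc/log.txt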
00:05:38.752 [2024-07-12 01:24:04.984870] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.752 00:05:38.752 real 0m0.074s 00:05:38.752 user 0m0.046s 00:05:38.752 sys 0m0.027s 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.752 01:24:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.752 ************************************ 00:05:38.752 END TEST skip_rpc_with_delay 00:05:38.753 ************************************ 00:05:38.753 01:24:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.753 01:24:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.753 01:24:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.753 01:24:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.753 01:24:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.753 01:24:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.753 ************************************ 00:05:38.753 START TEST exit_on_failed_rpc_init 00:05:38.753 ************************************ 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3738419 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3738419 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3738419 ']' 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.753 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.018 [2024-07-12 01:24:05.143353] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
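The skip_rpc_with_delay case just concluded is a pure argument-validation check, matching the error printed above: spdk_tgt must refuse --wait-for-rpc when no RPC server is going to be started. A short sketch of that expectation:

    # Expected to exit non-zero: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi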
00:05:39.018 [2024-07-12 01:24:05.143419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738419 ] 00:05:39.018 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.018 [2024-07-12 01:24:05.214527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.018 [2024-07-12 01:24:05.254362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:39.670 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.671 01:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.671 [2024-07-12 01:24:05.955921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:39.671 [2024-07-12 01:24:05.955974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738610 ] 00:05:39.671 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.932 [2024-07-12 01:24:06.036937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.932 [2024-07-12 01:24:06.067972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.932 [2024-07-12 01:24:06.068036] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
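The exit_on_failed_rpc_init case launches a second target while the first still owns the default RPC socket, and expects RPC initialization to fail with the "socket path in use" error shown below. A sketch of the scenario (socket handling simplified; the suite again relies on waitforlisten and killprocess):

    ./build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
    # wait for it to start listening, then:
    if ./build/bin/spdk_tgt -m 0x2; then     # second instance must fail RPC init and exit non-zero
        echo "unexpected: two targets shared one RPC socket" >&2
        exit 1
    fi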
00:05:39.932 [2024-07-12 01:24:06.068045] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.932 [2024-07-12 01:24:06.068052] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3738419 00:05:39.932 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3738419 ']' 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3738419 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3738419 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3738419' 00:05:39.933 killing process with pid 3738419 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3738419 00:05:39.933 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3738419 00:05:40.194 00:05:40.194 real 0m1.283s 00:05:40.194 user 0m1.453s 00:05:40.194 sys 0m0.383s 00:05:40.194 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.194 01:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.194 ************************************ 00:05:40.194 END TEST exit_on_failed_rpc_init 00:05:40.194 ************************************ 00:05:40.194 01:24:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.194 00:05:40.194 real 0m13.540s 00:05:40.194 user 0m13.060s 00:05:40.194 sys 0m1.490s 00:05:40.194 01:24:06 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.194 01:24:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.194 ************************************ 00:05:40.194 END TEST skip_rpc 00:05:40.194 ************************************ 00:05:40.194 01:24:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.194 01:24:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.194 01:24:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.194 01:24:06 -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.194 ************************************ 00:05:40.194 START TEST rpc_client 00:05:40.194 ************************************ 00:05:40.194 01:24:06 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.455 * Looking for test storage... 00:05:40.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:40.455 01:24:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:40.455 OK 00:05:40.455 01:24:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.455 00:05:40.455 real 0m0.126s 00:05:40.455 user 0m0.053s 00:05:40.455 sys 0m0.078s 00:05:40.455 01:24:06 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.455 01:24:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.455 ************************************ 00:05:40.455 END TEST rpc_client 00:05:40.455 ************************************ 00:05:40.455 01:24:06 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.455 01:24:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.455 01:24:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.455 01:24:06 -- common/autotest_common.sh@10 -- # set +x 00:05:40.455 ************************************ 00:05:40.455 START TEST json_config 00:05:40.455 ************************************ 00:05:40.455 01:24:06 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.455 01:24:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.455 01:24:06 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.455 01:24:06 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.455 01:24:06 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.455 01:24:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.455 01:24:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.455 01:24:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.455 01:24:06 json_config -- paths/export.sh@5 -- # export PATH 00:05:40.455 01:24:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@47 -- # : 0 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.455 01:24:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.456 01:24:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.456 01:24:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.456 01:24:06 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.456 01:24:06 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.456 01:24:06 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:40.456 INFO: JSON configuration test init 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 01:24:06 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:40.456 01:24:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.456 01:24:06 json_config -- json_config/common.sh@10 -- # shift 00:05:40.456 01:24:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.456 01:24:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.456 01:24:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.456 01:24:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.456 01:24:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.456 01:24:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3738886 00:05:40.456 01:24:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.456 Waiting for target to run... 
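As the app_socket and app_params tables above show, the json_config harness runs the target (and, in other variants, an initiator) each on its own core mask and its own RPC socket, and steers every RPC with the -s option of rpc.py. A minimal sketch of that pattern, using the target socket from the table; the --wait-for-rpc flag defers subsystem initialization until configuration RPCs have been issued:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # RPCs for this instance go to the non-default socket:
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types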
00:05:40.456 01:24:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3738886 /var/tmp/spdk_tgt.sock 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@827 -- # '[' -z 3738886 ']' 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.456 01:24:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.456 01:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.716 [2024-07-12 01:24:06.839440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:40.716 [2024-07-12 01:24:06.839503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738886 ] 00:05:40.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.976 [2024-07-12 01:24:07.112244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.976 [2024-07-12 01:24:07.129638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:41.546 01:24:07 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.546 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.546 01:24:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:41.546 01:24:07 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:41.546 01:24:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:41.805 01:24:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:41.805 01:24:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:41.805 01:24:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.805 01:24:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.064 01:24:08 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:42.064 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:42.064 01:24:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.064 01:24:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:42.064 01:24:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.064 01:24:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:42.064 01:24:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.064 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.323 MallocForNvmf0 00:05:42.323 01:24:08 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.323 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.323 MallocForNvmf1 00:05:42.323 01:24:08 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.323 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.583 [2024-07-12 01:24:08.798283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.583 01:24:08 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.583 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.842 01:24:08 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.842 01:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.842 01:24:09 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.842 01:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.102 01:24:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.102 01:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.102 [2024-07-12 01:24:09.408266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.102 01:24:09 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:43.102 01:24:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.102 01:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 01:24:09 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:43.361 01:24:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.361 01:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 01:24:09 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:43.361 01:24:09 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.361 01:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.361 MallocBdevForConfigChangeCheck 00:05:43.361 01:24:09 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:43.361 01:24:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.361 01:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.361 01:24:09 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:43.361 01:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.930 01:24:09 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:43.930 INFO: shutting down applications... 
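The create_nvmf_subsystem_config step that just completed reduces to a short RPC sequence. The sketch below replays it outside the test harness with the same arguments seen in the trace: two malloc bdevs, a TCP transport, and one subsystem with both namespaces and a listener on 127.0.0.1:4420. It assumes a local SPDK checkout at SPDK_DIR and a target already listening on /var/tmp/spdk_tgt.sock.

# Replay of the NVMe-oF/TCP configuration staged above, same arguments as the trace.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }

rpc bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MiB malloc bdev, 512-byte blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB malloc bdev, 1024-byte blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Here -a allows any host to connect, -s sets the subsystem serial number, and 4420 is the conventional NVMe/TCP port, matching the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice above.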
00:05:43.930 01:24:09 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:43.930 01:24:09 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:43.930 01:24:09 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:43.930 01:24:09 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:44.190 Calling clear_iscsi_subsystem 00:05:44.190 Calling clear_nvmf_subsystem 00:05:44.190 Calling clear_nbd_subsystem 00:05:44.190 Calling clear_ublk_subsystem 00:05:44.190 Calling clear_vhost_blk_subsystem 00:05:44.190 Calling clear_vhost_scsi_subsystem 00:05:44.190 Calling clear_bdev_subsystem 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:44.190 01:24:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:44.449 01:24:10 json_config -- json_config/json_config.sh@345 -- # break 00:05:44.450 01:24:10 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:44.450 01:24:10 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:44.450 01:24:10 json_config -- json_config/common.sh@31 -- # local app=target 00:05:44.450 01:24:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.450 01:24:10 json_config -- json_config/common.sh@35 -- # [[ -n 3738886 ]] 00:05:44.450 01:24:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3738886 00:05:44.450 01:24:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.450 01:24:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.450 01:24:10 json_config -- json_config/common.sh@41 -- # kill -0 3738886 00:05:44.450 01:24:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.019 01:24:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.019 01:24:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.019 01:24:11 json_config -- json_config/common.sh@41 -- # kill -0 3738886 00:05:45.020 01:24:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.020 01:24:11 json_config -- json_config/common.sh@43 -- # break 00:05:45.020 01:24:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.020 01:24:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.020 SPDK target shutdown done 00:05:45.020 01:24:11 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:45.020 INFO: relaunching applications... 
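The shutdown traced above (json_config_test_shutdown_app) sends SIGINT to the target and then polls the pid until it exits, for up to 30 iterations of 0.5 s as the counters show. A condensed sketch of the same loop follows; the forced kill at the end is this sketch's own fallback, not necessarily what common.sh does.

# Condensed sketch of the shutdown loop traced above: SIGINT, then poll with
# kill -0 until the process disappears (30 x 0.5 s, as in the trace).
shutdown_target() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0     # already gone
    for _ in $(seq 1 30); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            return 0
        fi
        sleep 0.5
    done
    echo "target still running after 15s, sending SIGKILL (sketch-only fallback)" >&2
    kill -9 "$pid"
}

shutdown_target 3738886   # e.g. the target pid from the run above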
00:05:45.020 01:24:11 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.020 01:24:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.020 01:24:11 json_config -- json_config/common.sh@10 -- # shift 00:05:45.020 01:24:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.020 01:24:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.020 01:24:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.020 01:24:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.020 01:24:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.020 01:24:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3740305 00:05:45.020 01:24:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.020 Waiting for target to run... 00:05:45.020 01:24:11 json_config -- json_config/common.sh@25 -- # waitforlisten 3740305 /var/tmp/spdk_tgt.sock 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@827 -- # '[' -z 3740305 ']' 00:05:45.020 01:24:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.020 01:24:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.020 [2024-07-12 01:24:11.299357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:45.020 [2024-07-12 01:24:11.299440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740305 ] 00:05:45.020 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.280 [2024-07-12 01:24:11.563582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.280 [2024-07-12 01:24:11.581642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.850 [2024-07-12 01:24:12.056195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.850 [2024-07-12 01:24:12.088557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.850 01:24:12 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.850 01:24:12 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:45.850 01:24:12 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.850 00:05:45.850 01:24:12 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:45.850 01:24:12 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.850 INFO: Checking if target configuration is the same... 
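The "Checking if target configuration is the same..." step that follows dumps the live configuration over RPC, normalizes both JSON documents with config_filter.py -method sort, and diffs them; an exit of 0 means unchanged, while the later change-detection pass expects an exit of 1. A minimal sketch of the same comparison, assuming a local checkout at SPDK_DIR and that config_filter.py reads the document on stdin, as json_diff.sh drives it in the trace:

# Sketch of the config comparison performed below.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
SOCK=/var/tmp/spdk_tgt.sock
REF_JSON=$SPDK_DIR/spdk_tgt_config.json        # config the target was started from

live=$(mktemp) ref=$(mktemp)
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" save_config \
    | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"
"$SPDK_DIR/test/json_config/config_filter.py" -method sort < "$REF_JSON" > "$ref"

if diff -u "$ref" "$live"; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi
rm -f "$live" "$ref"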
00:05:45.850 01:24:12 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.850 01:24:12 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:45.850 01:24:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.850 + '[' 2 -ne 2 ']' 00:05:45.850 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.850 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.850 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.850 +++ basename /dev/fd/62 00:05:45.850 ++ mktemp /tmp/62.XXX 00:05:45.850 + tmp_file_1=/tmp/62.Z9U 00:05:45.850 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.850 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.850 + tmp_file_2=/tmp/spdk_tgt_config.json.btQ 00:05:45.850 + ret=0 00:05:45.850 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.110 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.371 + diff -u /tmp/62.Z9U /tmp/spdk_tgt_config.json.btQ 00:05:46.371 + echo 'INFO: JSON config files are the same' 00:05:46.371 INFO: JSON config files are the same 00:05:46.371 + rm /tmp/62.Z9U /tmp/spdk_tgt_config.json.btQ 00:05:46.371 + exit 0 00:05:46.371 01:24:12 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:46.371 01:24:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:46.371 INFO: changing configuration and checking if this can be detected... 00:05:46.371 01:24:12 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.371 01:24:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.371 01:24:12 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:46.371 01:24:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.371 01:24:12 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.371 + '[' 2 -ne 2 ']' 00:05:46.371 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.371 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:46.371 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.371 +++ basename /dev/fd/62 00:05:46.371 ++ mktemp /tmp/62.XXX 00:05:46.371 + tmp_file_1=/tmp/62.ZGs 00:05:46.371 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.371 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.371 + tmp_file_2=/tmp/spdk_tgt_config.json.6nW 00:05:46.371 + ret=0 00:05:46.371 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.632 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.632 + diff -u /tmp/62.ZGs /tmp/spdk_tgt_config.json.6nW 00:05:46.632 + ret=1 00:05:46.632 + echo '=== Start of file: /tmp/62.ZGs ===' 00:05:46.632 + cat /tmp/62.ZGs 00:05:46.892 + echo '=== End of file: /tmp/62.ZGs ===' 00:05:46.892 + echo '' 00:05:46.892 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6nW ===' 00:05:46.892 + cat /tmp/spdk_tgt_config.json.6nW 00:05:46.892 + echo '=== End of file: /tmp/spdk_tgt_config.json.6nW ===' 00:05:46.892 + echo '' 00:05:46.892 + rm /tmp/62.ZGs /tmp/spdk_tgt_config.json.6nW 00:05:46.892 + exit 1 00:05:46.892 01:24:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:46.892 INFO: configuration change detected. 00:05:46.892 01:24:12 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:46.892 01:24:12 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:46.892 01:24:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.892 01:24:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@317 -- # [[ -n 3740305 ]] 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.892 01:24:13 json_config -- json_config/json_config.sh@323 -- # killprocess 3740305 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@946 -- # '[' -z 3740305 ']' 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@950 -- # kill -0 3740305 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@951 -- # uname 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.892 01:24:13 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3740305 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3740305' 00:05:46.892 killing process with pid 3740305 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@965 -- # kill 3740305 00:05:46.892 01:24:13 json_config -- common/autotest_common.sh@970 -- # wait 3740305 00:05:47.152 01:24:13 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.152 01:24:13 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:47.152 01:24:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.152 01:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 01:24:13 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:47.152 01:24:13 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:47.152 INFO: Success 00:05:47.152 00:05:47.152 real 0m6.758s 00:05:47.152 user 0m8.165s 00:05:47.152 sys 0m1.666s 00:05:47.152 01:24:13 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.152 01:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 ************************************ 00:05:47.152 END TEST json_config 00:05:47.152 ************************************ 00:05:47.152 01:24:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.152 01:24:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.152 01:24:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.152 01:24:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.152 ************************************ 00:05:47.152 START TEST json_config_extra_key 00:05:47.152 ************************************ 00:05:47.152 01:24:13 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.414 01:24:13 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.414 01:24:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.414 01:24:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.414 01:24:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.414 01:24:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.414 01:24:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.414 01:24:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.414 01:24:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:47.414 01:24:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.414 01:24:13 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.414 01:24:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:47.414 INFO: launching applications... 00:05:47.414 01:24:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3741001 00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.414 Waiting for target to run... 
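json_config_extra_key starts the target directly from a prebuilt JSON file (test/json_config/extra_key.json) instead of staging RPCs first. That file's contents are not shown in this log; the snippet below writes an illustrative config in the subsystems/config/method/params layout that save_config emits and boots a target from it. The malloc bdev and its parameters are placeholders, not the contents of extra_key.json.

# Illustrative --json startup; JSON layout follows the save_config output format,
# the specific bdev parameters are placeholders.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /tmp/minimal_config.json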
00:05:47.414 01:24:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3741001 /var/tmp/spdk_tgt.sock 00:05:47.414 01:24:13 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3741001 ']' 00:05:47.415 01:24:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.415 01:24:13 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.415 01:24:13 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.415 01:24:13 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.415 01:24:13 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.415 01:24:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.415 [2024-07-12 01:24:13.670866] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:47.415 [2024-07-12 01:24:13.670948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741001 ] 00:05:47.415 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.676 [2024-07-12 01:24:13.935756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.676 [2024-07-12 01:24:13.953936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.247 01:24:14 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.247 01:24:14 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:48.247 00:05:48.247 01:24:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:48.247 INFO: shutting down applications... 
00:05:48.247 01:24:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3741001 ]] 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3741001 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3741001 00:05:48.247 01:24:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3741001 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.817 01:24:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.817 SPDK target shutdown done 00:05:48.817 01:24:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.817 Success 00:05:48.817 00:05:48.817 real 0m1.433s 00:05:48.817 user 0m1.065s 00:05:48.817 sys 0m0.376s 00:05:48.817 01:24:14 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.817 01:24:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.817 ************************************ 00:05:48.817 END TEST json_config_extra_key 00:05:48.817 ************************************ 00:05:48.817 01:24:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.817 01:24:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.817 01:24:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.817 01:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:48.817 ************************************ 00:05:48.817 START TEST alias_rpc 00:05:48.817 ************************************ 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.817 * Looking for test storage... 
00:05:48.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:48.817 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.817 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3741283 00:05:48.817 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3741283 00:05:48.817 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3741283 ']' 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.817 01:24:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.817 [2024-07-12 01:24:15.170523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:48.817 [2024-07-12 01:24:15.170600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741283 ] 00:05:49.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.077 [2024-07-12 01:24:15.243111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.077 [2024-07-12 01:24:15.282720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.648 01:24:15 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.648 01:24:15 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:49.648 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.908 01:24:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3741283 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3741283 ']' 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3741283 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741283 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741283' 00:05:49.908 killing process with pid 3741283 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@965 -- # kill 3741283 00:05:49.908 01:24:16 alias_rpc -- common/autotest_common.sh@970 -- # wait 3741283 00:05:50.169 00:05:50.169 real 0m1.334s 00:05:50.169 user 0m1.436s 00:05:50.169 sys 0m0.372s 00:05:50.169 01:24:16 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.169 01:24:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.169 
************************************ 00:05:50.169 END TEST alias_rpc 00:05:50.169 ************************************ 00:05:50.169 01:24:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:50.169 01:24:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.169 01:24:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.169 01:24:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.169 01:24:16 -- common/autotest_common.sh@10 -- # set +x 00:05:50.169 ************************************ 00:05:50.169 START TEST spdkcli_tcp 00:05:50.169 ************************************ 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.169 * Looking for test storage... 00:05:50.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3741551 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3741551 00:05:50.169 01:24:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3741551 ']' 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.169 01:24:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.430 [2024-07-12 01:24:16.578791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:50.430 [2024-07-12 01:24:16.578858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741551 ] 00:05:50.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.430 [2024-07-12 01:24:16.650848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.430 [2024-07-12 01:24:16.690956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.430 [2024-07-12 01:24:16.690960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.000 01:24:17 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.000 01:24:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:51.000 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3741870 00:05:51.000 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.000 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.260 [ 00:05:51.260 "bdev_malloc_delete", 00:05:51.260 "bdev_malloc_create", 00:05:51.260 "bdev_null_resize", 00:05:51.260 "bdev_null_delete", 00:05:51.260 "bdev_null_create", 00:05:51.260 "bdev_nvme_cuse_unregister", 00:05:51.260 "bdev_nvme_cuse_register", 00:05:51.261 "bdev_opal_new_user", 00:05:51.261 "bdev_opal_set_lock_state", 00:05:51.261 "bdev_opal_delete", 00:05:51.261 "bdev_opal_get_info", 00:05:51.261 "bdev_opal_create", 00:05:51.261 "bdev_nvme_opal_revert", 00:05:51.261 "bdev_nvme_opal_init", 00:05:51.261 "bdev_nvme_send_cmd", 00:05:51.261 "bdev_nvme_get_path_iostat", 00:05:51.261 "bdev_nvme_get_mdns_discovery_info", 00:05:51.261 "bdev_nvme_stop_mdns_discovery", 00:05:51.261 "bdev_nvme_start_mdns_discovery", 00:05:51.261 "bdev_nvme_set_multipath_policy", 00:05:51.261 "bdev_nvme_set_preferred_path", 00:05:51.261 "bdev_nvme_get_io_paths", 00:05:51.261 "bdev_nvme_remove_error_injection", 00:05:51.261 "bdev_nvme_add_error_injection", 00:05:51.261 "bdev_nvme_get_discovery_info", 00:05:51.261 "bdev_nvme_stop_discovery", 00:05:51.261 "bdev_nvme_start_discovery", 00:05:51.261 "bdev_nvme_get_controller_health_info", 00:05:51.261 "bdev_nvme_disable_controller", 00:05:51.261 "bdev_nvme_enable_controller", 00:05:51.261 "bdev_nvme_reset_controller", 00:05:51.261 "bdev_nvme_get_transport_statistics", 00:05:51.261 "bdev_nvme_apply_firmware", 00:05:51.261 "bdev_nvme_detach_controller", 00:05:51.261 "bdev_nvme_get_controllers", 00:05:51.261 "bdev_nvme_attach_controller", 00:05:51.261 "bdev_nvme_set_hotplug", 00:05:51.261 "bdev_nvme_set_options", 00:05:51.261 "bdev_passthru_delete", 00:05:51.261 "bdev_passthru_create", 00:05:51.261 "bdev_lvol_set_parent_bdev", 00:05:51.261 "bdev_lvol_set_parent", 00:05:51.261 "bdev_lvol_check_shallow_copy", 00:05:51.261 "bdev_lvol_start_shallow_copy", 00:05:51.261 "bdev_lvol_grow_lvstore", 00:05:51.261 "bdev_lvol_get_lvols", 00:05:51.261 "bdev_lvol_get_lvstores", 00:05:51.261 "bdev_lvol_delete", 00:05:51.261 "bdev_lvol_set_read_only", 00:05:51.261 "bdev_lvol_resize", 00:05:51.261 "bdev_lvol_decouple_parent", 00:05:51.261 "bdev_lvol_inflate", 00:05:51.261 "bdev_lvol_rename", 00:05:51.261 "bdev_lvol_clone_bdev", 00:05:51.261 "bdev_lvol_clone", 00:05:51.261 "bdev_lvol_snapshot", 00:05:51.261 "bdev_lvol_create", 00:05:51.261 "bdev_lvol_delete_lvstore", 00:05:51.261 "bdev_lvol_rename_lvstore", 
00:05:51.261 "bdev_lvol_create_lvstore", 00:05:51.261 "bdev_raid_set_options", 00:05:51.261 "bdev_raid_remove_base_bdev", 00:05:51.261 "bdev_raid_add_base_bdev", 00:05:51.261 "bdev_raid_delete", 00:05:51.261 "bdev_raid_create", 00:05:51.261 "bdev_raid_get_bdevs", 00:05:51.261 "bdev_error_inject_error", 00:05:51.261 "bdev_error_delete", 00:05:51.261 "bdev_error_create", 00:05:51.261 "bdev_split_delete", 00:05:51.261 "bdev_split_create", 00:05:51.261 "bdev_delay_delete", 00:05:51.261 "bdev_delay_create", 00:05:51.261 "bdev_delay_update_latency", 00:05:51.261 "bdev_zone_block_delete", 00:05:51.261 "bdev_zone_block_create", 00:05:51.261 "blobfs_create", 00:05:51.261 "blobfs_detect", 00:05:51.261 "blobfs_set_cache_size", 00:05:51.261 "bdev_aio_delete", 00:05:51.261 "bdev_aio_rescan", 00:05:51.261 "bdev_aio_create", 00:05:51.261 "bdev_ftl_set_property", 00:05:51.261 "bdev_ftl_get_properties", 00:05:51.261 "bdev_ftl_get_stats", 00:05:51.261 "bdev_ftl_unmap", 00:05:51.261 "bdev_ftl_unload", 00:05:51.261 "bdev_ftl_delete", 00:05:51.261 "bdev_ftl_load", 00:05:51.261 "bdev_ftl_create", 00:05:51.261 "bdev_virtio_attach_controller", 00:05:51.261 "bdev_virtio_scsi_get_devices", 00:05:51.261 "bdev_virtio_detach_controller", 00:05:51.261 "bdev_virtio_blk_set_hotplug", 00:05:51.261 "bdev_iscsi_delete", 00:05:51.261 "bdev_iscsi_create", 00:05:51.261 "bdev_iscsi_set_options", 00:05:51.261 "accel_error_inject_error", 00:05:51.261 "ioat_scan_accel_module", 00:05:51.261 "dsa_scan_accel_module", 00:05:51.261 "iaa_scan_accel_module", 00:05:51.261 "vfu_virtio_create_scsi_endpoint", 00:05:51.261 "vfu_virtio_scsi_remove_target", 00:05:51.261 "vfu_virtio_scsi_add_target", 00:05:51.261 "vfu_virtio_create_blk_endpoint", 00:05:51.261 "vfu_virtio_delete_endpoint", 00:05:51.261 "keyring_file_remove_key", 00:05:51.261 "keyring_file_add_key", 00:05:51.261 "keyring_linux_set_options", 00:05:51.261 "iscsi_get_histogram", 00:05:51.261 "iscsi_enable_histogram", 00:05:51.261 "iscsi_set_options", 00:05:51.261 "iscsi_get_auth_groups", 00:05:51.261 "iscsi_auth_group_remove_secret", 00:05:51.261 "iscsi_auth_group_add_secret", 00:05:51.261 "iscsi_delete_auth_group", 00:05:51.261 "iscsi_create_auth_group", 00:05:51.261 "iscsi_set_discovery_auth", 00:05:51.261 "iscsi_get_options", 00:05:51.261 "iscsi_target_node_request_logout", 00:05:51.261 "iscsi_target_node_set_redirect", 00:05:51.261 "iscsi_target_node_set_auth", 00:05:51.261 "iscsi_target_node_add_lun", 00:05:51.261 "iscsi_get_stats", 00:05:51.261 "iscsi_get_connections", 00:05:51.261 "iscsi_portal_group_set_auth", 00:05:51.261 "iscsi_start_portal_group", 00:05:51.261 "iscsi_delete_portal_group", 00:05:51.261 "iscsi_create_portal_group", 00:05:51.261 "iscsi_get_portal_groups", 00:05:51.261 "iscsi_delete_target_node", 00:05:51.261 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.261 "iscsi_target_node_add_pg_ig_maps", 00:05:51.261 "iscsi_create_target_node", 00:05:51.261 "iscsi_get_target_nodes", 00:05:51.261 "iscsi_delete_initiator_group", 00:05:51.261 "iscsi_initiator_group_remove_initiators", 00:05:51.261 "iscsi_initiator_group_add_initiators", 00:05:51.261 "iscsi_create_initiator_group", 00:05:51.261 "iscsi_get_initiator_groups", 00:05:51.261 "nvmf_set_crdt", 00:05:51.261 "nvmf_set_config", 00:05:51.261 "nvmf_set_max_subsystems", 00:05:51.261 "nvmf_stop_mdns_prr", 00:05:51.261 "nvmf_publish_mdns_prr", 00:05:51.261 "nvmf_subsystem_get_listeners", 00:05:51.261 "nvmf_subsystem_get_qpairs", 00:05:51.261 "nvmf_subsystem_get_controllers", 00:05:51.261 "nvmf_get_stats", 00:05:51.261 
"nvmf_get_transports", 00:05:51.261 "nvmf_create_transport", 00:05:51.261 "nvmf_get_targets", 00:05:51.261 "nvmf_delete_target", 00:05:51.261 "nvmf_create_target", 00:05:51.261 "nvmf_subsystem_allow_any_host", 00:05:51.261 "nvmf_subsystem_remove_host", 00:05:51.261 "nvmf_subsystem_add_host", 00:05:51.261 "nvmf_ns_remove_host", 00:05:51.261 "nvmf_ns_add_host", 00:05:51.261 "nvmf_subsystem_remove_ns", 00:05:51.261 "nvmf_subsystem_add_ns", 00:05:51.261 "nvmf_subsystem_listener_set_ana_state", 00:05:51.261 "nvmf_discovery_get_referrals", 00:05:51.261 "nvmf_discovery_remove_referral", 00:05:51.261 "nvmf_discovery_add_referral", 00:05:51.261 "nvmf_subsystem_remove_listener", 00:05:51.261 "nvmf_subsystem_add_listener", 00:05:51.261 "nvmf_delete_subsystem", 00:05:51.261 "nvmf_create_subsystem", 00:05:51.261 "nvmf_get_subsystems", 00:05:51.261 "env_dpdk_get_mem_stats", 00:05:51.261 "nbd_get_disks", 00:05:51.261 "nbd_stop_disk", 00:05:51.261 "nbd_start_disk", 00:05:51.261 "ublk_recover_disk", 00:05:51.261 "ublk_get_disks", 00:05:51.261 "ublk_stop_disk", 00:05:51.261 "ublk_start_disk", 00:05:51.261 "ublk_destroy_target", 00:05:51.261 "ublk_create_target", 00:05:51.261 "virtio_blk_create_transport", 00:05:51.261 "virtio_blk_get_transports", 00:05:51.261 "vhost_controller_set_coalescing", 00:05:51.261 "vhost_get_controllers", 00:05:51.261 "vhost_delete_controller", 00:05:51.261 "vhost_create_blk_controller", 00:05:51.261 "vhost_scsi_controller_remove_target", 00:05:51.261 "vhost_scsi_controller_add_target", 00:05:51.261 "vhost_start_scsi_controller", 00:05:51.261 "vhost_create_scsi_controller", 00:05:51.261 "thread_set_cpumask", 00:05:51.261 "framework_get_scheduler", 00:05:51.261 "framework_set_scheduler", 00:05:51.262 "framework_get_reactors", 00:05:51.262 "thread_get_io_channels", 00:05:51.262 "thread_get_pollers", 00:05:51.262 "thread_get_stats", 00:05:51.262 "framework_monitor_context_switch", 00:05:51.262 "spdk_kill_instance", 00:05:51.262 "log_enable_timestamps", 00:05:51.262 "log_get_flags", 00:05:51.262 "log_clear_flag", 00:05:51.262 "log_set_flag", 00:05:51.262 "log_get_level", 00:05:51.262 "log_set_level", 00:05:51.262 "log_get_print_level", 00:05:51.262 "log_set_print_level", 00:05:51.262 "framework_enable_cpumask_locks", 00:05:51.262 "framework_disable_cpumask_locks", 00:05:51.262 "framework_wait_init", 00:05:51.262 "framework_start_init", 00:05:51.262 "scsi_get_devices", 00:05:51.262 "bdev_get_histogram", 00:05:51.262 "bdev_enable_histogram", 00:05:51.262 "bdev_set_qos_limit", 00:05:51.262 "bdev_set_qd_sampling_period", 00:05:51.262 "bdev_get_bdevs", 00:05:51.262 "bdev_reset_iostat", 00:05:51.262 "bdev_get_iostat", 00:05:51.262 "bdev_examine", 00:05:51.262 "bdev_wait_for_examine", 00:05:51.262 "bdev_set_options", 00:05:51.262 "notify_get_notifications", 00:05:51.262 "notify_get_types", 00:05:51.262 "accel_get_stats", 00:05:51.262 "accel_set_options", 00:05:51.262 "accel_set_driver", 00:05:51.262 "accel_crypto_key_destroy", 00:05:51.262 "accel_crypto_keys_get", 00:05:51.262 "accel_crypto_key_create", 00:05:51.262 "accel_assign_opc", 00:05:51.262 "accel_get_module_info", 00:05:51.262 "accel_get_opc_assignments", 00:05:51.262 "vmd_rescan", 00:05:51.262 "vmd_remove_device", 00:05:51.262 "vmd_enable", 00:05:51.262 "sock_get_default_impl", 00:05:51.262 "sock_set_default_impl", 00:05:51.262 "sock_impl_set_options", 00:05:51.262 "sock_impl_get_options", 00:05:51.262 "iobuf_get_stats", 00:05:51.262 "iobuf_set_options", 00:05:51.262 "keyring_get_keys", 00:05:51.262 "framework_get_pci_devices", 
00:05:51.262 "framework_get_config", 00:05:51.262 "framework_get_subsystems", 00:05:51.262 "vfu_tgt_set_base_path", 00:05:51.262 "trace_get_info", 00:05:51.262 "trace_get_tpoint_group_mask", 00:05:51.262 "trace_disable_tpoint_group", 00:05:51.262 "trace_enable_tpoint_group", 00:05:51.262 "trace_clear_tpoint_mask", 00:05:51.262 "trace_set_tpoint_mask", 00:05:51.262 "spdk_get_version", 00:05:51.262 "rpc_get_methods" 00:05:51.262 ] 00:05:51.262 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.262 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.262 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3741551 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3741551 ']' 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3741551 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741551 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741551' 00:05:51.262 killing process with pid 3741551 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3741551 00:05:51.262 01:24:17 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3741551 00:05:51.522 00:05:51.522 real 0m1.388s 00:05:51.522 user 0m2.573s 00:05:51.522 sys 0m0.425s 00:05:51.522 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.522 01:24:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.522 ************************************ 00:05:51.522 END TEST spdkcli_tcp 00:05:51.522 ************************************ 00:05:51.522 01:24:17 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.522 01:24:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.522 01:24:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.522 01:24:17 -- common/autotest_common.sh@10 -- # set +x 00:05:51.522 ************************************ 00:05:51.522 START TEST dpdk_mem_utility 00:05:51.522 ************************************ 00:05:51.522 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.782 * Looking for test storage... 
00:05:51.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:51.782 01:24:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.782 01:24:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3741936 00:05:51.782 01:24:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3741936 00:05:51.782 01:24:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3741936 ']' 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.782 01:24:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.782 [2024-07-12 01:24:18.026908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:51.782 [2024-07-12 01:24:18.026980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741936 ] 00:05:51.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.782 [2024-07-12 01:24:18.100651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.042 [2024-07-12 01:24:18.138879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.612 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.612 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:52.612 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.612 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.612 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.612 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.612 { 00:05:52.612 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.612 } 00:05:52.612 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.612 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.612 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:52.612 1 heaps totaling size 814.000000 MiB 00:05:52.612 size: 814.000000 MiB heap id: 0 00:05:52.612 end heaps---------- 00:05:52.612 8 mempools totaling size 598.116089 MiB 00:05:52.612 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.612 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.612 size: 84.521057 MiB name: bdev_io_3741936 00:05:52.612 size: 51.011292 MiB name: evtpool_3741936 00:05:52.612 size: 50.003479 MiB name: 
msgpool_3741936 00:05:52.612 size: 21.763794 MiB name: PDU_Pool 00:05:52.612 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.612 size: 0.026123 MiB name: Session_Pool 00:05:52.612 end mempools------- 00:05:52.612 6 memzones totaling size 4.142822 MiB 00:05:52.612 size: 1.000366 MiB name: RG_ring_0_3741936 00:05:52.612 size: 1.000366 MiB name: RG_ring_1_3741936 00:05:52.612 size: 1.000366 MiB name: RG_ring_4_3741936 00:05:52.612 size: 1.000366 MiB name: RG_ring_5_3741936 00:05:52.612 size: 0.125366 MiB name: RG_ring_2_3741936 00:05:52.612 size: 0.015991 MiB name: RG_ring_3_3741936 00:05:52.612 end memzones------- 00:05:52.612 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.612 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:52.612 list of free elements. size: 12.519348 MiB 00:05:52.612 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:52.612 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:52.612 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:52.612 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:52.612 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:52.612 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:52.612 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:52.612 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:52.612 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:52.612 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:52.612 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:52.612 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:52.612 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:52.612 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:52.612 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:52.612 list of standard malloc elements. 
size: 199.218079 MiB 00:05:52.612 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:52.612 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:52.612 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:52.612 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:52.612 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.612 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.612 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:52.613 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.613 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:52.613 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:52.613 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:52.613 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:52.613 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:52.613 list of memzone associated elements. 
size: 602.262573 MiB 00:05:52.613 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:52.613 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.613 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:52.613 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.613 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:52.613 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3741936_0 00:05:52.613 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:52.613 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3741936_0 00:05:52.613 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:52.613 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3741936_0 00:05:52.613 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:52.613 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.613 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:52.613 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.613 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:52.613 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3741936 00:05:52.613 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:52.613 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3741936 00:05:52.613 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.613 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3741936 00:05:52.613 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:52.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.613 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:52.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.613 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:52.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.613 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:52.613 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:52.613 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:52.613 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3741936 00:05:52.613 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:52.613 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3741936 00:05:52.613 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:52.613 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3741936 00:05:52.613 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:52.613 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3741936 00:05:52.613 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:52.613 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3741936 00:05:52.613 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:52.613 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.613 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:52.613 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.613 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:52.613 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.613 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:52.613 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3741936 00:05:52.613 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:52.613 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.613 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:52.613 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.613 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:52.613 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3741936 00:05:52.613 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:52.613 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.613 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:52.613 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3741936 00:05:52.613 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:52.613 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3741936 00:05:52.613 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:52.613 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.613 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.613 01:24:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3741936 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3741936 ']' 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3741936 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741936 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.613 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.873 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741936' 00:05:52.873 killing process with pid 3741936 00:05:52.873 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3741936 00:05:52.873 01:24:18 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3741936 00:05:52.873 00:05:52.873 real 0m1.288s 00:05:52.873 user 0m1.361s 00:05:52.873 sys 0m0.389s 00:05:52.873 01:24:19 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.873 01:24:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.873 ************************************ 00:05:52.873 END TEST dpdk_mem_utility 00:05:52.873 ************************************ 00:05:52.873 01:24:19 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:52.874 01:24:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.874 01:24:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.874 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.135 ************************************ 00:05:53.135 START TEST event 00:05:53.135 ************************************ 00:05:53.135 01:24:19 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:53.135 * Looking for test storage... 
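The dpdk_mem_utility pass above reduces to three steps: start spdk_tgt, trigger a DPDK memory dump over RPC, then post-process the dump with dpdk_mem_info.py. A minimal sketch of the same sequence, assuming spdk_tgt is already up on the default /var/tmp/spdk.sock and paths are relative to the SPDK repo root:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes the dump to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as printed above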
00:05:53.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:53.135 01:24:19 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:53.135 01:24:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.135 01:24:19 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.135 01:24:19 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:53.135 01:24:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.135 01:24:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.135 ************************************ 00:05:53.135 START TEST event_perf 00:05:53.135 ************************************ 00:05:53.135 01:24:19 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.135 Running I/O for 1 seconds...[2024-07-12 01:24:19.388105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:53.135 [2024-07-12 01:24:19.388189] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742326 ] 00:05:53.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.135 [2024-07-12 01:24:19.460165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.396 [2024-07-12 01:24:19.496365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.396 [2024-07-12 01:24:19.496481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.396 [2024-07-12 01:24:19.496635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.396 Running I/O for 1 seconds...[2024-07-12 01:24:19.496635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.340 00:05:54.340 lcore 0: 167127 00:05:54.340 lcore 1: 167130 00:05:54.340 lcore 2: 167124 00:05:54.340 lcore 3: 167127 00:05:54.340 done. 00:05:54.340 00:05:54.340 real 0m1.170s 00:05:54.340 user 0m4.090s 00:05:54.340 sys 0m0.076s 00:05:54.340 01:24:20 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.340 01:24:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.340 ************************************ 00:05:54.340 END TEST event_perf 00:05:54.340 ************************************ 00:05:54.340 01:24:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.340 01:24:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:54.340 01:24:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.340 01:24:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.340 ************************************ 00:05:54.340 START TEST event_reactor 00:05:54.340 ************************************ 00:05:54.341 01:24:20 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.341 [2024-07-12 01:24:20.633156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:54.341 [2024-07-12 01:24:20.633258] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742684 ] 00:05:54.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.602 [2024-07-12 01:24:20.716518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.602 [2024-07-12 01:24:20.746821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.561 test_start 00:05:55.561 oneshot 00:05:55.561 tick 100 00:05:55.561 tick 100 00:05:55.561 tick 250 00:05:55.561 tick 100 00:05:55.561 tick 100 00:05:55.561 tick 250 00:05:55.561 tick 100 00:05:55.561 tick 500 00:05:55.561 tick 100 00:05:55.561 tick 100 00:05:55.561 tick 250 00:05:55.561 tick 100 00:05:55.561 tick 100 00:05:55.561 test_end 00:05:55.561 00:05:55.561 real 0m1.174s 00:05:55.561 user 0m1.088s 00:05:55.561 sys 0m0.078s 00:05:55.561 01:24:21 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.561 01:24:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.561 ************************************ 00:05:55.561 END TEST event_reactor 00:05:55.561 ************************************ 00:05:55.561 01:24:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.561 01:24:21 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:55.561 01:24:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.561 01:24:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.561 ************************************ 00:05:55.561 START TEST event_reactor_perf 00:05:55.561 ************************************ 00:05:55.561 01:24:21 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.561 [2024-07-12 01:24:21.887161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:55.561 [2024-07-12 01:24:21.887267] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742887 ] 00:05:55.820 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.820 [2024-07-12 01:24:21.956150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.820 [2024-07-12 01:24:21.987280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.762 test_start 00:05:56.762 test_end 00:05:56.762 Performance: 366383 events per second 00:05:56.762 00:05:56.762 real 0m1.162s 00:05:56.762 user 0m1.082s 00:05:56.762 sys 0m0.076s 00:05:56.762 01:24:23 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.762 01:24:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.762 ************************************ 00:05:56.762 END TEST event_reactor_perf 00:05:56.762 ************************************ 00:05:56.762 01:24:23 event -- event/event.sh@49 -- # uname -s 00:05:56.762 01:24:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.762 01:24:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.762 01:24:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.762 01:24:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.762 01:24:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.762 ************************************ 00:05:56.762 START TEST event_scheduler 00:05:56.762 ************************************ 00:05:56.762 01:24:23 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:57.022 * Looking for test storage... 00:05:57.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:57.022 01:24:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.023 01:24:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3743106 00:05:57.023 01:24:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.023 01:24:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.023 01:24:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3743106 00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3743106 ']' 00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
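For reference, the three event micro-benchmarks exercised above are standalone binaries invoked directly by event.sh; repo-relative, with -m selecting the core mask and -t the run time in seconds, the calls are:

    ./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts on cores 0-3
    ./test/event/reactor/reactor -t 1                # single reactor, prints the oneshot/tick trace
    ./test/event/reactor_perf/reactor_perf -t 1      # events per second through one reactor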
00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.023 01:24:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.023 [2024-07-12 01:24:23.254949] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:57.023 [2024-07-12 01:24:23.255015] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743106 ] 00:05:57.023 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.023 [2024-07-12 01:24:23.316126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.023 [2024-07-12 01:24:23.354519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.023 [2024-07-12 01:24:23.354684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.023 [2024-07-12 01:24:23.354846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.023 [2024-07-12 01:24:23.354847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:57.964 01:24:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 POWER: Env isn't set yet! 00:05:57.964 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:57.964 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.964 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.964 POWER: Attempting to initialise PSTAT power management... 
00:05:57.964 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:57.964 POWER: Initialized successfully for lcore 0 power management 00:05:57.964 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:57.964 POWER: Initialized successfully for lcore 1 power management 00:05:57.964 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:57.964 POWER: Initialized successfully for lcore 2 power management 00:05:57.964 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:57.964 POWER: Initialized successfully for lcore 3 power management 00:05:57.964 [2024-07-12 01:24:24.065585] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.964 [2024-07-12 01:24:24.065598] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.964 [2024-07-12 01:24:24.065604] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 [2024-07-12 01:24:24.115692] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 ************************************ 00:05:57.964 START TEST scheduler_create_thread 00:05:57.964 ************************************ 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 2 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 3 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 4 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 5 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 6 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 7 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.964 8 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.964 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.535 9 00:05:58.535 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.535 01:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.535 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:58.535 01:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.920 10 00:05:59.920 01:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.920 01:24:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.921 01:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.921 01:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.492 01:24:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.492 01:24:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.492 01:24:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.492 01:24:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.492 01:24:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.435 01:24:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.435 01:24:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.435 01:24:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.435 01:24:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.006 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.006 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.006 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.006 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.006 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.578 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.578 00:06:02.579 real 0m4.614s 00:06:02.579 user 0m0.028s 00:06:02.579 sys 0m0.003s 00:06:02.579 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.579 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.579 ************************************ 00:06:02.579 END TEST scheduler_create_thread 00:06:02.579 ************************************ 00:06:02.579 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.579 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3743106 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3743106 ']' 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3743106 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
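The scheduler_create_thread test above drives everything through RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. A rough hand-run equivalent, assuming rpc.py can import the test's scheduler_plugin module (the harness arranges this via PYTHONPATH) and that the thread id passed to set_active/delete is the one returned by the matching create call:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # wait for /var/tmp/spdk.sock to appear before issuing RPCs
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # one thread pinned to core 0, reporting 100% busy
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active <id> 50   # retune to 50% busy
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete <id>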
00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3743106 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3743106' 00:06:02.579 killing process with pid 3743106 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3743106 00:06:02.579 01:24:28 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3743106 00:06:02.842 [2024-07-12 01:24:29.048751] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.842 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:02.842 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:02.842 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:02.842 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:02.842 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:02.842 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:02.842 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:02.842 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:03.101 00:06:03.101 real 0m6.121s 00:06:03.101 user 0m13.753s 00:06:03.101 sys 0m0.356s 00:06:03.101 01:24:29 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.101 01:24:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.101 ************************************ 00:06:03.101 END TEST event_scheduler 00:06:03.101 ************************************ 00:06:03.101 01:24:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.101 01:24:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.101 01:24:29 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.101 01:24:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.101 01:24:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.101 ************************************ 00:06:03.101 START TEST app_repeat 00:06:03.101 ************************************ 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3744485 00:06:03.101 01:24:29 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3744485' 00:06:03.101 Process app_repeat pid: 3744485 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.101 spdk_app_start Round 0 00:06:03.101 01:24:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3744485 /var/tmp/spdk-nbd.sock 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3744485 ']' 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.101 01:24:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.101 [2024-07-12 01:24:29.339240] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:03.101 [2024-07-12 01:24:29.339351] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744485 ] 00:06:03.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.101 [2024-07-12 01:24:29.415849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.101 [2024-07-12 01:24:29.446996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.101 [2024-07-12 01:24:29.446999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.360 01:24:29 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.360 01:24:29 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:03.360 01:24:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.360 Malloc0 00:06:03.360 01:24:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.621 Malloc1 00:06:03.621 01:24:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.621 01:24:29 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.621 01:24:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.880 /dev/nbd0 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.880 1+0 records in 00:06:03.880 1+0 records out 00:06:03.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214853 s, 19.1 MB/s 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.880 /dev/nbd1 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:03.880 01:24:30 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.880 1+0 records in 00:06:03.880 1+0 records out 00:06:03.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274489 s, 14.9 MB/s 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:03.880 01:24:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.880 01:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.139 { 00:06:04.139 "nbd_device": "/dev/nbd0", 00:06:04.139 "bdev_name": "Malloc0" 00:06:04.139 }, 00:06:04.139 { 00:06:04.139 "nbd_device": "/dev/nbd1", 00:06:04.139 "bdev_name": "Malloc1" 00:06:04.139 } 00:06:04.139 ]' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.139 { 00:06:04.139 "nbd_device": "/dev/nbd0", 00:06:04.139 "bdev_name": "Malloc0" 00:06:04.139 }, 00:06:04.139 { 00:06:04.139 "nbd_device": "/dev/nbd1", 00:06:04.139 "bdev_name": "Malloc1" 00:06:04.139 } 00:06:04.139 ]' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.139 /dev/nbd1' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.139 /dev/nbd1' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.139 01:24:30 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.139 01:24:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.140 256+0 records in 00:06:04.140 256+0 records out 00:06:04.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01183 s, 88.6 MB/s 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.140 256+0 records in 00:06:04.140 256+0 records out 00:06:04.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157192 s, 66.7 MB/s 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.140 256+0 records in 00:06:04.140 256+0 records out 00:06:04.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170999 s, 61.3 MB/s 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.140 01:24:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.400 01:24:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.660 01:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.920 01:24:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.920 01:24:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.920 01:24:31 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:05.180 [2024-07-12 01:24:31.309844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.180 [2024-07-12 01:24:31.340004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.180 [2024-07-12 01:24:31.340007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.180 [2024-07-12 01:24:31.371694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.180 [2024-07-12 01:24:31.371729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.542 spdk_app_start Round 1 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3744485 /var/tmp/spdk-nbd.sock 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3744485 ']' 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.542 Malloc0 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.542 Malloc1 00:06:08.542 01:24:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.542 /dev/nbd0 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.542 1+0 records in 00:06:08.542 1+0 records out 00:06:08.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269892 s, 15.2 MB/s 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:08.542 01:24:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.542 01:24:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.803 /dev/nbd1 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
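The waitfornbd helper traced above polls /proc/partitions for the new device name (up to 20 attempts) and then reads one 4096-byte block with direct I/O to confirm the device actually serves data; the stat/rm pair checks that the read produced a non-empty file. A sketch of that readiness check, with the scratch-file path simplified and the retry delay assumed (the trace only shows the counter and the grep):

    # readiness check for an nbd device (sketch)
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # retry delay is an assumption
        done
        # one direct-I/O read of a single block proves the device is usable
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }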
00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.803 1+0 records in 00:06:08.803 1+0 records out 00:06:08.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218466 s, 18.7 MB/s 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:08.803 01:24:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.803 01:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.063 { 00:06:09.063 "nbd_device": "/dev/nbd0", 00:06:09.063 "bdev_name": "Malloc0" 00:06:09.063 }, 00:06:09.063 { 00:06:09.063 "nbd_device": "/dev/nbd1", 00:06:09.063 "bdev_name": "Malloc1" 00:06:09.063 } 00:06:09.063 ]' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.063 { 00:06:09.063 "nbd_device": "/dev/nbd0", 00:06:09.063 "bdev_name": "Malloc0" 00:06:09.063 }, 00:06:09.063 { 00:06:09.063 "nbd_device": "/dev/nbd1", 00:06:09.063 "bdev_name": "Malloc1" 00:06:09.063 } 00:06:09.063 ]' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.063 /dev/nbd1' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.063 /dev/nbd1' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.063 01:24:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.063 256+0 records in 00:06:09.063 256+0 records out 00:06:09.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117875 s, 89.0 MB/s 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.064 256+0 records in 00:06:09.064 256+0 records out 00:06:09.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159451 s, 65.8 MB/s 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.064 256+0 records in 00:06:09.064 256+0 records out 00:06:09.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164804 s, 63.6 MB/s 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.064 01:24:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.323 
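The write/verify pass above fills a 1 MiB scratch file from /dev/urandom, copies it onto each exported nbd device with O_DIRECT, then compares the first 1M of each device against the scratch file with cmp before deleting it. A sketch of that verification, using the device list and file location from this run:

    # write-then-verify across all nbd devices (sketch)
    TMP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of=$TMP bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if=$TMP of=$dev bs=4096 count=256 oflag=direct    # write it to the device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $TMP $dev                               # read back and compare
    done
    rm $TMP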
01:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.323 01:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.582 01:24:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.582 01:24:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.842 01:24:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.842 [2024-07-12 01:24:36.144909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.842 [2024-07-12 01:24:36.174925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.842 [2024-07-12 01:24:36.174927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.102 [2024-07-12 01:24:36.207280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
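Teardown mirrors setup: each device is detached with nbd_stop_disk, waitfornbd_exit loops until the name drops out of /proc/partitions, nbd_get_disks must report an empty list, and the target is finally told to exit with spdk_kill_instance SIGTERM so the next round can restart it. A sketch of that sequence under the same path and socket assumptions as the setup sketch:

    # per-round teardown (sketch)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC -s $SOCK nbd_stop_disk $dev
        name=$(basename $dev)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone from the kernel's view
            sleep 0.1                                      # retry delay is an assumption
        done
    done
    remaining=$($RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$remaining" -eq 0 ]                                 # nothing may still be exported
    $RPC -s $SOCK spdk_kill_instance SIGTERM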
00:06:10.102 [2024-07-12 01:24:36.207317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.404 01:24:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.404 01:24:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.404 spdk_app_start Round 2 00:06:13.404 01:24:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3744485 /var/tmp/spdk-nbd.sock 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3744485 ']' 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:13.405 01:24:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.405 Malloc0 00:06:13.405 01:24:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.405 Malloc1 00:06:13.405 01:24:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.405 /dev/nbd0 00:06:13.405 
01:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.405 1+0 records in 00:06:13.405 1+0 records out 00:06:13.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.7825e-05 s, 41.9 MB/s 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.405 01:24:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.405 01:24:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.666 /dev/nbd1 00:06:13.666 01:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.666 01:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.666 01:24:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.667 1+0 records in 00:06:13.667 1+0 records out 00:06:13.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276831 s, 14.8 MB/s 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.667 01:24:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.667 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.667 01:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.667 01:24:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.667 01:24:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.667 01:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.928 { 00:06:13.928 "nbd_device": "/dev/nbd0", 00:06:13.928 "bdev_name": "Malloc0" 00:06:13.928 }, 00:06:13.928 { 00:06:13.928 "nbd_device": "/dev/nbd1", 00:06:13.928 "bdev_name": "Malloc1" 00:06:13.928 } 00:06:13.928 ]' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.928 { 00:06:13.928 "nbd_device": "/dev/nbd0", 00:06:13.928 "bdev_name": "Malloc0" 00:06:13.928 }, 00:06:13.928 { 00:06:13.928 "nbd_device": "/dev/nbd1", 00:06:13.928 "bdev_name": "Malloc1" 00:06:13.928 } 00:06:13.928 ]' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.928 /dev/nbd1' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.928 /dev/nbd1' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.928 256+0 records in 00:06:13.928 256+0 records out 00:06:13.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112967 s, 92.8 MB/s 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.928 256+0 records in 00:06:13.928 256+0 records out 00:06:13.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169486 s, 61.9 MB/s 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.928 01:24:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.929 256+0 records in 00:06:13.929 256+0 records out 00:06:13.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169617 s, 61.8 MB/s 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.929 01:24:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.190 01:24:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.451 01:24:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.451 01:24:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.711 01:24:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.711 [2024-07-12 01:24:40.973601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.711 [2024-07-12 01:24:41.004499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.711 [2024-07-12 01:24:41.004501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.711 [2024-07-12 01:24:41.036407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.711 [2024-07-12 01:24:41.036440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:18.008 01:24:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3744485 /var/tmp/spdk-nbd.sock 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3744485 ']' 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.008 01:24:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:18.008 01:24:44 event.app_repeat -- event/event.sh@39 -- # killprocess 3744485 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3744485 ']' 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3744485 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3744485 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3744485' 00:06:18.008 killing process with pid 3744485 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3744485 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3744485 00:06:18.008 spdk_app_start is called in Round 0. 00:06:18.008 Shutdown signal received, stop current app iteration 00:06:18.008 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:18.008 spdk_app_start is called in Round 1. 00:06:18.008 Shutdown signal received, stop current app iteration 00:06:18.008 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:18.008 spdk_app_start is called in Round 2. 00:06:18.008 Shutdown signal received, stop current app iteration 00:06:18.008 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:18.008 spdk_app_start is called in Round 3. 
00:06:18.008 Shutdown signal received, stop current app iteration 00:06:18.008 01:24:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.008 01:24:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.008 00:06:18.008 real 0m14.865s 00:06:18.008 user 0m32.261s 00:06:18.008 sys 0m2.121s 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.008 01:24:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 ************************************ 00:06:18.008 END TEST app_repeat 00:06:18.008 ************************************ 00:06:18.008 01:24:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.008 01:24:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.008 01:24:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.008 01:24:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.008 01:24:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 ************************************ 00:06:18.008 START TEST cpu_locks 00:06:18.008 ************************************ 00:06:18.008 01:24:44 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.008 * Looking for test storage... 00:06:18.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:18.008 01:24:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.009 01:24:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.009 01:24:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.009 01:24:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.009 01:24:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.009 01:24:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.009 01:24:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.269 ************************************ 00:06:18.269 START TEST default_locks 00:06:18.269 ************************************ 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3747730 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3747730 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3747730 ']' 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
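The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the waitforlisten helper: it keeps the target's pid alive-checked while retrying an RPC against the socket until the target answers. The exact probe the helper issues is not visible in this trace; in the sketch below a one-second rpc_get_methods call stands in for it, and the retry count mirrors the max_retries=100 seen above:

    # poll until a target answers RPCs on its UNIX socket (sketch; probe method is an assumption)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died during startup
            if "$RPC" -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                    # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }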
00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.269 01:24:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.269 [2024-07-12 01:24:44.422341] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:18.269 [2024-07-12 01:24:44.422391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747730 ] 00:06:18.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.269 [2024-07-12 01:24:44.490521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.269 [2024-07-12 01:24:44.525706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.840 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.840 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:18.840 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3747730 00:06:18.840 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3747730 00:06:18.840 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.100 lslocks: write error 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3747730 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3747730 ']' 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3747730 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3747730 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3747730' 00:06:19.100 killing process with pid 3747730 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3747730 00:06:19.100 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3747730 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3747730 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3747730 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.360 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.360 01:24:45 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3747730 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3747730 ']' 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3747730) - No such process 00:06:19.361 ERROR: process (pid: 3747730) is no longer running 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.361 00:06:19.361 real 0m1.143s 00:06:19.361 user 0m1.219s 00:06:19.361 sys 0m0.332s 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.361 01:24:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.361 ************************************ 00:06:19.361 END TEST default_locks 00:06:19.361 ************************************ 00:06:19.361 01:24:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.361 01:24:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.361 01:24:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.361 01:24:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.361 ************************************ 00:06:19.361 START TEST default_locks_via_rpc 00:06:19.361 ************************************ 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3748088 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3748088 00:06:19.361 
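The default_locks case above reduces to two assertions: a target started with -m 0x1 must hold a CPU-core lock, visible as an spdk_cpu_lock entry in lslocks output for its pid, and once that target is killed, waitforlisten on the stale pid must fail with "No such process" rather than hang. The stray "lslocks: write error" line is almost certainly lslocks reacting to grep -q closing the pipe after the first match, not a test failure. A sketch of the lock assertion as traced:

    # assert that a running spdk_tgt holds its per-core lock file (sketch)
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # non-zero exit means no core lock is held
    }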
01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3748088 ']' 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.361 01:24:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.361 [2024-07-12 01:24:45.644586] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:19.361 [2024-07-12 01:24:45.644638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748088 ] 00:06:19.361 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.361 [2024-07-12 01:24:45.711135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.620 [2024-07-12 01:24:45.745002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # 
grep -q spdk_cpu_lock 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3748088 ']' 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3748088' 00:06:20.188 killing process with pid 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3748088 00:06:20.188 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3748088 00:06:20.448 00:06:20.448 real 0m1.148s 00:06:20.448 user 0m1.202s 00:06:20.448 sys 0m0.360s 00:06:20.448 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.448 01:24:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 END TEST default_locks_via_rpc 00:06:20.448 ************************************ 00:06:20.448 01:24:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.448 01:24:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.448 01:24:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.448 01:24:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 ************************************ 00:06:20.448 START TEST non_locking_app_on_locked_coremask 00:06:20.448 ************************************ 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3748216 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3748216 /var/tmp/spdk.sock 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3748216 ']' 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
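default_locks_via_rpc exercises the same check through runtime RPCs instead of process flags: framework_disable_cpumask_locks releases the per-core lock files, the no-locks state is asserted, framework_enable_cpumask_locks re-takes them, and lslocks must again show spdk_cpu_lock for the pid. A sketch of that toggle against the default /var/tmp/spdk.sock, with $spdk_tgt_pid standing in for the pid captured at launch:

    # toggle CPU-core lock files at runtime and re-check them (sketch)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_disable_cpumask_locks                    # drop the per-core lock files
    $RPC framework_enable_cpumask_locks                     # take them again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock      # lock must be visible once more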
00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.448 01:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.707 [2024-07-12 01:24:46.848221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:20.707 [2024-07-12 01:24:46.848279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748216 ] 00:06:20.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.707 [2024-07-12 01:24:46.916317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.707 [2024-07-12 01:24:46.952720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3748469 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3748469 /var/tmp/spdk2.sock 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3748469 ']' 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.275 01:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.535 [2024-07-12 01:24:47.652867] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:21.535 [2024-07-12 01:24:47.652923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748469 ] 00:06:21.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.535 [2024-07-12 01:24:47.750275] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.535 [2024-07-12 01:24:47.750306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.535 [2024-07-12 01:24:47.813477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.105 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.105 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:22.105 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3748216 00:06:22.105 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3748216 00:06:22.105 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.676 lslocks: write error 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3748216 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3748216 ']' 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3748216 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.676 01:24:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3748216 00:06:22.676 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.676 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.676 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3748216' 00:06:22.676 killing process with pid 3748216 00:06:22.676 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3748216 00:06:22.676 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3748216 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3748469 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3748469 ']' 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3748469 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3748469 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3748469' 00:06:23.248 
killing process with pid 3748469 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3748469 00:06:23.248 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3748469 00:06:23.510 00:06:23.510 real 0m2.869s 00:06:23.510 user 0m3.119s 00:06:23.510 sys 0m0.850s 00:06:23.510 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.510 01:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.510 ************************************ 00:06:23.510 END TEST non_locking_app_on_locked_coremask 00:06:23.510 ************************************ 00:06:23.510 01:24:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.510 01:24:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.510 01:24:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.510 01:24:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.510 ************************************ 00:06:23.510 START TEST locking_app_on_unlocked_coremask 00:06:23.510 ************************************ 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3748842 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3748842 /var/tmp/spdk.sock 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3748842 ']' 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.510 01:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.510 [2024-07-12 01:24:49.791182] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:23.510 [2024-07-12 01:24:49.791235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748842 ] 00:06:23.510 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.510 [2024-07-12 01:24:49.857216] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.510 [2024-07-12 01:24:49.857246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.771 [2024-07-12 01:24:49.886722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3749158 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3749158 /var/tmp/spdk2.sock 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3749158 ']' 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.342 01:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 [2024-07-12 01:24:50.598691] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:24.342 [2024-07-12 01:24:50.598748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749158 ] 00:06:24.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.602 [2024-07-12 01:24:50.700364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.602 [2024-07-12 01:24:50.763664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.197 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.197 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:25.197 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3749158 00:06:25.197 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3749158 00:06:25.197 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.768 lslocks: write error 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3748842 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3748842 ']' 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3748842 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3748842 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3748842' 00:06:25.768 killing process with pid 3748842 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3748842 00:06:25.768 01:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3748842 00:06:26.030 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3749158 00:06:26.030 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3749158 ']' 00:06:26.030 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3749158 00:06:26.030 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:26.290 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.290 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3749158 00:06:26.290 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3749158' 00:06:26.291 killing process with pid 3749158 00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3749158 00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3749158 00:06:26.291 00:06:26.291 real 0m2.896s 00:06:26.291 user 0m3.145s 00:06:26.291 sys 0m0.890s 00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.291 01:24:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.291 ************************************ 00:06:26.291 END TEST locking_app_on_unlocked_coremask 00:06:26.291 ************************************ 00:06:26.551 01:24:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.551 01:24:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.551 01:24:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.551 01:24:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.551 ************************************ 00:06:26.551 START TEST locking_app_on_locked_coremask 00:06:26.551 ************************************ 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3749548 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3749548 /var/tmp/spdk.sock 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3749548 ']' 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.551 01:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.551 [2024-07-12 01:24:52.756557] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:26.551 [2024-07-12 01:24:52.756605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749548 ] 00:06:26.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.551 [2024-07-12 01:24:52.822862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.551 [2024-07-12 01:24:52.856673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3749661 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3749661 /var/tmp/spdk2.sock 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3749661 /var/tmp/spdk2.sock 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3749661 /var/tmp/spdk2.sock 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3749661 ']' 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.494 01:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 [2024-07-12 01:24:53.537459] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:27.494 [2024-07-12 01:24:53.537502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749661 ] 00:06:27.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.494 [2024-07-12 01:24:53.627400] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3749548 has claimed it. 00:06:27.495 [2024-07-12 01:24:53.627442] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3749661) - No such process 00:06:28.064 ERROR: process (pid: 3749661) is no longer running 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.064 lslocks: write error 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3749548 ']' 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3749548' 00:06:28.064 killing process with pid 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3749548 00:06:28.064 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3749548 00:06:28.324 00:06:28.324 real 0m1.800s 00:06:28.324 user 0m1.991s 00:06:28.324 sys 0m0.416s 00:06:28.324 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.324 01:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.324 ************************************ 00:06:28.324 END TEST locking_app_on_locked_coremask 00:06:28.324 ************************************ 00:06:28.324 01:24:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.324 01:24:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.324 01:24:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.324 01:24:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.324 ************************************ 00:06:28.324 START TEST locking_overlapped_coremask 00:06:28.324 ************************************ 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3749926 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3749926 /var/tmp/spdk.sock 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3749926 ']' 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.324 01:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.324 [2024-07-12 01:24:54.626780] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:28.324 [2024-07-12 01:24:54.626832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749926 ] 00:06:28.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.584 [2024-07-12 01:24:54.695864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.584 [2024-07-12 01:24:54.733707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.584 [2024-07-12 01:24:54.733829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.584 [2024-07-12 01:24:54.733832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3750132 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3750132 /var/tmp/spdk2.sock 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3750132 /var/tmp/spdk2.sock 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3750132 /var/tmp/spdk2.sock 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3750132 ']' 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.156 01:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.156 [2024-07-12 01:24:55.443664] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:29.156 [2024-07-12 01:24:55.443718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750132 ] 00:06:29.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.416 [2024-07-12 01:24:55.524113] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3749926 has claimed it. 00:06:29.416 [2024-07-12 01:24:55.524146] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3750132) - No such process 00:06:29.986 ERROR: process (pid: 3750132) is no longer running 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3749926 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3749926 ']' 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3749926 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3749926 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3749926' 00:06:29.986 killing process with pid 3749926 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3749926 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3749926 00:06:29.986 00:06:29.986 real 0m1.753s 00:06:29.986 user 0m5.093s 00:06:29.986 sys 0m0.365s 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.986 01:24:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.986 ************************************ 00:06:29.986 END TEST locking_overlapped_coremask 00:06:29.986 ************************************ 00:06:30.247 01:24:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.247 01:24:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.247 01:24:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.247 01:24:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.247 ************************************ 00:06:30.247 START TEST locking_overlapped_coremask_via_rpc 00:06:30.247 ************************************ 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3750302 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3750302 /var/tmp/spdk.sock 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3750302 ']' 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.247 01:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.247 [2024-07-12 01:24:56.457442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:30.248 [2024-07-12 01:24:56.457494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750302 ] 00:06:30.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.248 [2024-07-12 01:24:56.516464] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.248 [2024-07-12 01:24:56.516489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.248 [2024-07-12 01:24:56.548501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.248 [2024-07-12 01:24:56.548713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.248 [2024-07-12 01:24:56.548715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3750584 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3750584 /var/tmp/spdk2.sock 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3750584 ']' 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.190 01:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.190 [2024-07-12 01:24:57.283748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:31.190 [2024-07-12 01:24:57.283802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750584 ] 00:06:31.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.190 [2024-07-12 01:24:57.364168] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.190 [2024-07-12 01:24:57.364196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.190 [2024-07-12 01:24:57.421776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.190 [2024-07-12 01:24:57.425350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.190 [2024-07-12 01:24:57.425353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 [2024-07-12 01:24:58.061291] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3750302 has claimed it. 
00:06:31.762 request: 00:06:31.762 { 00:06:31.762 "method": "framework_enable_cpumask_locks", 00:06:31.762 "req_id": 1 00:06:31.762 } 00:06:31.762 Got JSON-RPC error response 00:06:31.762 response: 00:06:31.762 { 00:06:31.762 "code": -32603, 00:06:31.762 "message": "Failed to claim CPU core: 2" 00:06:31.762 } 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3750302 /var/tmp/spdk.sock 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3750302 ']' 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.762 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3750584 /var/tmp/spdk2.sock 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3750584 ']' 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.023 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.283 00:06:32.283 real 0m2.002s 00:06:32.283 user 0m0.770s 00:06:32.283 sys 0m0.152s 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.283 01:24:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.283 ************************************ 00:06:32.283 END TEST locking_overlapped_coremask_via_rpc 00:06:32.283 ************************************ 00:06:32.283 01:24:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.283 01:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3750302 ]] 00:06:32.283 01:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3750302 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3750302 ']' 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3750302 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3750302 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3750302' 00:06:32.283 killing process with pid 3750302 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3750302 00:06:32.283 01:24:58 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3750302 00:06:32.544 01:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3750584 ]] 00:06:32.544 01:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3750584 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3750584 ']' 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3750584 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3750584 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3750584' 00:06:32.544 killing process with pid 3750584 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3750584 00:06:32.544 01:24:58 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3750584 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3750302 ]] 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3750302 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3750302 ']' 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3750302 00:06:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3750302) - No such process 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3750302 is not found' 00:06:32.806 Process with pid 3750302 is not found 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3750584 ]] 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3750584 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3750584 ']' 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3750584 00:06:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3750584) - No such process 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3750584 is not found' 00:06:32.806 Process with pid 3750584 is not found 00:06:32.806 01:24:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.806 00:06:32.806 real 0m14.685s 00:06:32.806 user 0m26.103s 00:06:32.806 sys 0m4.222s 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.806 01:24:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.806 ************************************ 00:06:32.806 END TEST cpu_locks 00:06:32.806 ************************************ 00:06:32.806 00:06:32.806 real 0m39.725s 00:06:32.806 user 1m18.596s 00:06:32.806 sys 0m7.285s 00:06:32.806 01:24:58 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.806 01:24:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.806 ************************************ 00:06:32.806 END TEST event 00:06:32.806 ************************************ 00:06:32.806 01:24:59 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.806 01:24:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.806 01:24:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.806 01:24:59 -- common/autotest_common.sh@10 -- # set +x 00:06:32.806 ************************************ 00:06:32.806 START TEST thread 00:06:32.806 ************************************ 00:06:32.806 01:24:59 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.806 * Looking for test storage... 00:06:32.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:32.806 01:24:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.806 01:24:59 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:32.806 01:24:59 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.806 01:24:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.067 ************************************ 00:06:33.067 START TEST thread_poller_perf 00:06:33.067 ************************************ 00:06:33.067 01:24:59 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.067 [2024-07-12 01:24:59.188479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:33.067 [2024-07-12 01:24:59.188580] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751071 ] 00:06:33.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.067 [2024-07-12 01:24:59.248754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.067 [2024-07-12 01:24:59.278624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.067 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.009 ====================================== 00:06:34.009 busy:2405166786 (cyc) 00:06:34.009 total_run_count: 418000 00:06:34.009 tsc_hz: 2400000000 (cyc) 00:06:34.009 ====================================== 00:06:34.009 poller_cost: 5753 (cyc), 2397 (nsec) 00:06:34.009 00:06:34.009 real 0m1.152s 00:06:34.009 user 0m1.080s 00:06:34.009 sys 0m0.069s 00:06:34.009 01:25:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.009 01:25:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.009 ************************************ 00:06:34.009 END TEST thread_poller_perf 00:06:34.009 ************************************ 00:06:34.009 01:25:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.009 01:25:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:34.009 01:25:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.009 01:25:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.269 ************************************ 00:06:34.269 START TEST thread_poller_perf 00:06:34.269 ************************************ 00:06:34.269 01:25:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.269 [2024-07-12 01:25:00.410422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:34.269 [2024-07-12 01:25:00.410514] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751274 ] 00:06:34.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.269 [2024-07-12 01:25:00.468741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.269 [2024-07-12 01:25:00.496565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.269 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:35.211 ====================================== 00:06:35.211 busy:2401493172 (cyc) 00:06:35.211 total_run_count: 5566000 00:06:35.211 tsc_hz: 2400000000 (cyc) 00:06:35.211 ====================================== 00:06:35.211 poller_cost: 431 (cyc), 179 (nsec) 00:06:35.211 00:06:35.211 real 0m1.141s 00:06:35.211 user 0m1.072s 00:06:35.211 sys 0m0.066s 00:06:35.211 01:25:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.211 01:25:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.211 ************************************ 00:06:35.211 END TEST thread_poller_perf 00:06:35.211 ************************************ 00:06:35.211 01:25:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.211 00:06:35.211 real 0m2.524s 00:06:35.211 user 0m2.238s 00:06:35.211 sys 0m0.292s 00:06:35.211 01:25:01 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.211 01:25:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.211 ************************************ 00:06:35.211 END TEST thread 00:06:35.211 ************************************ 00:06:35.472 01:25:01 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:35.472 01:25:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.472 01:25:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.472 01:25:01 -- common/autotest_common.sh@10 -- # set +x 00:06:35.472 ************************************ 00:06:35.472 START TEST accel 00:06:35.472 ************************************ 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:35.472 * Looking for test storage... 
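The poller_cost figures reported by the two poller_perf runs above can be cross-checked by hand. The snippet below is an illustrative back-of-the-envelope calculation using the numbers from the first run (busy cycles, total_run_count, and tsc_hz are taken from the log; the helper itself is not part of the test suite):

```bash
# Re-derive poller_cost from the first poller_perf run above:
#   poller_cost (cyc)  = busy cycles / total_run_count
#   poller_cost (nsec) = busy cycles * 1e9 / (total_run_count * tsc_hz)
busy=2405166786 runs=418000 tsc_hz=2400000000
echo "$(( busy / runs )) cyc"                            # -> 5753 cyc, as reported
echo "$(( busy * 1000000000 / (runs * tsc_hz) )) nsec"   # -> 2397 nsec, as reported
```

The same arithmetic applied to the second run (2401493172 cycles over 5566000 polls) yields the 431 cyc / 179 nsec figures shown in that run's summary.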
00:06:35.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:35.472 01:25:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:35.472 01:25:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:35.472 01:25:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.472 01:25:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3751509 00:06:35.472 01:25:01 accel -- accel/accel.sh@63 -- # waitforlisten 3751509 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@827 -- # '[' -z 3751509 ']' 00:06:35.472 01:25:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:35.472 01:25:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.472 01:25:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.472 01:25:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.472 01:25:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.472 01:25:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.472 01:25:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.472 01:25:01 accel -- accel/accel.sh@41 -- # jq -r . 00:06:35.472 01:25:01 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.472 01:25:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.472 [2024-07-12 01:25:01.795092] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:35.472 [2024-07-12 01:25:01.795148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751509 ] 00:06:35.472 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.733 [2024-07-12 01:25:01.853593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.733 [2024-07-12 01:25:01.884377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.303 01:25:02 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.303 01:25:02 accel -- common/autotest_common.sh@860 -- # return 0 00:06:36.303 01:25:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:36.303 01:25:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:36.303 01:25:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:36.303 01:25:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:36.303 01:25:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:36.303 01:25:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:36.303 01:25:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:36.303 01:25:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.303 01:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.303 01:25:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.303 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.303 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.303 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 
01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.304 01:25:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.304 01:25:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.304 01:25:02 accel -- accel/accel.sh@75 -- # killprocess 3751509 00:06:36.304 01:25:02 accel -- common/autotest_common.sh@946 -- # '[' -z 3751509 ']' 00:06:36.304 01:25:02 accel -- common/autotest_common.sh@950 -- # kill -0 3751509 00:06:36.304 01:25:02 accel -- common/autotest_common.sh@951 -- # uname 00:06:36.304 01:25:02 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.304 01:25:02 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3751509 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3751509' 00:06:36.564 killing process with pid 3751509 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@965 -- # kill 3751509 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@970 -- # wait 3751509 00:06:36.564 01:25:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:36.564 01:25:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.564 01:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.564 01:25:02 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:36.564 01:25:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
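The expected_opcs map built above is filled from the accel layer's opcode-to-module assignments, queried over the spdk_tgt RPC socket and flattened with jq. A minimal sketch of the same query run by hand, assuming spdk_tgt is listening on the default /var/tmp/spdk.sock and that scripts/rpc.py is invoked from the SPDK repository root:

  $ scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # emits one opcode=module pair per line, e.g. copy=software, crc32c=software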
00:06:36.823 01:25:02 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.823 01:25:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:36.823 01:25:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:36.823 01:25:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:36.823 01:25:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.823 01:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.823 ************************************ 00:06:36.823 START TEST accel_missing_filename 00:06:36.823 ************************************ 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.823 01:25:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:36.823 01:25:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:36.823 [2024-07-12 01:25:03.009624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:36.823 [2024-07-12 01:25:03.009721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751865 ] 00:06:36.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.823 [2024-07-12 01:25:03.077278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.823 [2024-07-12 01:25:03.114135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.823 [2024-07-12 01:25:03.145989] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.083 [2024-07-12 01:25:03.183121] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:37.083 A filename is required. 
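The "A filename is required." failure above is the intended negative path: the compress workload cannot run without an input file. A hedged side-by-side of the rejected call and a form that supplies one (the test/accel/bib path is the input used by the later compress_verify run, shown here in repository-relative form purely for illustration):

  $ accel_perf -t 1 -w compress
  # rejected: A filename is required.
  $ accel_perf -t 1 -w compress -l test/accel/bib
  # -l names the uncompressed input file, per the accel_perf option list printed further down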
00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.083 00:06:37.083 real 0m0.239s 00:06:37.083 user 0m0.175s 00:06:37.083 sys 0m0.104s 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.083 01:25:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:37.083 ************************************ 00:06:37.083 END TEST accel_missing_filename 00:06:37.083 ************************************ 00:06:37.083 01:25:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.083 01:25:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:37.083 01:25:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.083 01:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.083 ************************************ 00:06:37.083 START TEST accel_compress_verify 00:06:37.083 ************************************ 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.083 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.083 
01:25:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:37.083 01:25:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:37.083 [2024-07-12 01:25:03.321047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:37.083 [2024-07-12 01:25:03.321134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751892 ] 00:06:37.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.083 [2024-07-12 01:25:03.379961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.083 [2024-07-12 01:25:03.410061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.344 [2024-07-12 01:25:03.440590] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.344 [2024-07-12 01:25:03.475247] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:37.344 00:06:37.344 Compression does not support the verify option, aborting. 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.344 00:06:37.344 real 0m0.222s 00:06:37.344 user 0m0.159s 00:06:37.344 sys 0m0.105s 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.344 01:25:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.344 ************************************ 00:06:37.344 END TEST accel_compress_verify 00:06:37.344 ************************************ 00:06:37.344 01:25:03 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:37.344 01:25:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.344 01:25:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.344 01:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.344 ************************************ 00:06:37.344 START TEST accel_wrong_workload 00:06:37.345 ************************************ 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
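The compress_verify case above exercises the matching restriction in the other direction: compression does not accept the -y verify switch. The rejected invocation, as captured in this log (path shortened to its repository-relative form):

  $ accel_perf -t 1 -w compress -l test/accel/bib -y
  # aborted: Compression does not support the verify option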
00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:37.345 01:25:03 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:37.345 Unsupported workload type: foobar 00:06:37.345 [2024-07-12 01:25:03.612878] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:37.345 accel_perf options: 00:06:37.345 [-h help message] 00:06:37.345 [-q queue depth per core] 00:06:37.345 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.345 [-T number of threads per core 00:06:37.345 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.345 [-t time in seconds] 00:06:37.345 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.345 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:37.345 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.345 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.345 [-S for crc32c workload, use this seed value (default 0) 00:06:37.345 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.345 [-f for fill workload, use this BYTE value (default 255) 00:06:37.345 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.345 [-y verify result if this switch is on] 00:06:37.345 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.345 Can be used to spread operations across a wider range of memory. 
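The option list above is what accel_perf prints whenever argument parsing fails, and every flag exercised in the rest of this log maps onto it. Representative invocations taken from the runs below, all landing on the software module with 1-second runs:

  $ accel_perf -t 1 -w crc32c -S 32 -y              # crc32c, seed 32, verify result
  $ accel_perf -t 1 -w crc32c -y -C 2               # crc32c with an io vector size of 2
  $ accel_perf -t 1 -w copy -y                      # plain copy with verify
  $ accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 0x80, queue depth 64, 64 tasks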
00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.345 00:06:37.345 real 0m0.036s 00:06:37.345 user 0m0.023s 00:06:37.345 sys 0m0.013s 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.345 01:25:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:37.345 ************************************ 00:06:37.345 END TEST accel_wrong_workload 00:06:37.345 ************************************ 00:06:37.345 Error: writing output failed: Broken pipe 00:06:37.345 01:25:03 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.345 01:25:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:37.345 01:25:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.345 01:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.345 ************************************ 00:06:37.345 START TEST accel_negative_buffers 00:06:37.345 ************************************ 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.345 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:37.606 01:25:03 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:37.606 -x option must be non-negative. 
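The negative-buffer case above shows -x -1 being rejected during argument parsing, before any xor work is queued; per the option list, xor needs at least two source buffers. A hedged sketch of a form that would pass the check (not actually run in this log):

  $ accel_perf -t 1 -w xor -y -x 2
  # two source buffers is the documented minimum for the xor workload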
00:06:37.606 [2024-07-12 01:25:03.723330] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:37.606 accel_perf options: 00:06:37.606 [-h help message] 00:06:37.606 [-q queue depth per core] 00:06:37.606 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.606 [-T number of threads per core 00:06:37.606 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.606 [-t time in seconds] 00:06:37.606 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.606 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:37.606 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.606 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.606 [-S for crc32c workload, use this seed value (default 0) 00:06:37.606 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.606 [-f for fill workload, use this BYTE value (default 255) 00:06:37.606 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.606 [-y verify result if this switch is on] 00:06:37.606 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.606 Can be used to spread operations across a wider range of memory. 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.606 00:06:37.606 real 0m0.035s 00:06:37.606 user 0m0.022s 00:06:37.606 sys 0m0.013s 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.606 01:25:03 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:37.606 ************************************ 00:06:37.606 END TEST accel_negative_buffers 00:06:37.606 ************************************ 00:06:37.606 Error: writing output failed: Broken pipe 00:06:37.606 01:25:03 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:37.606 01:25:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:37.606 01:25:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.606 01:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.606 ************************************ 00:06:37.606 START TEST accel_crc32c 00:06:37.606 ************************************ 00:06:37.606 01:25:03 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:37.606 01:25:03 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:37.606 [2024-07-12 01:25:03.825724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:37.607 [2024-07-12 01:25:03.825816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752129 ] 00:06:37.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.607 [2024-07-12 01:25:03.888210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.607 [2024-07-12 01:25:03.926818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.607 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.903 01:25:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.898 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.898 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.898 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.898 01:25:05 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:38.899 01:25:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.899 00:06:38.899 real 0m1.239s 00:06:38.899 user 0m1.140s 00:06:38.899 sys 0m0.112s 00:06:38.899 01:25:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.899 01:25:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 ************************************ 00:06:38.899 END TEST accel_crc32c 00:06:38.899 ************************************ 00:06:38.899 01:25:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:38.899 01:25:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:38.899 01:25:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.899 01:25:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 ************************************ 00:06:38.899 START TEST accel_crc32c_C2 00:06:38.899 ************************************ 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:38.899 01:25:05 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.899 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:38.899 [2024-07-12 01:25:05.136999] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:38.899 [2024-07-12 01:25:05.137080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752313 ] 00:06:38.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.899 [2024-07-12 01:25:05.197755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.899 [2024-07-12 01:25:05.227147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.159 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.160 01:25:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.100 00:06:40.100 real 0m1.226s 00:06:40.100 user 0m1.141s 00:06:40.100 sys 0m0.097s 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.100 01:25:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:40.100 ************************************ 00:06:40.100 END TEST accel_crc32c_C2 00:06:40.100 ************************************ 00:06:40.100 01:25:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:40.100 01:25:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:40.100 01:25:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.100 01:25:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.100 ************************************ 00:06:40.100 START TEST accel_copy 00:06:40.100 ************************************ 00:06:40.100 01:25:06 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:40.100 
01:25:06 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:40.100 01:25:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:40.101 [2024-07-12 01:25:06.434964] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:40.101 [2024-07-12 01:25:06.435032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752660 ] 00:06:40.361 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.361 [2024-07-12 01:25:06.494074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.361 [2024-07-12 01:25:06.521193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.361 01:25:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:41.302 01:25:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.302 00:06:41.302 real 0m1.221s 00:06:41.302 user 0m1.123s 00:06:41.302 sys 0m0.109s 00:06:41.302 01:25:07 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.302 01:25:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.303 ************************************ 00:06:41.303 END TEST accel_copy 00:06:41.303 ************************************ 00:06:41.563 01:25:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.563 01:25:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:41.563 01:25:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.563 01:25:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.563 ************************************ 00:06:41.563 START TEST accel_fill 00:06:41.563 ************************************ 00:06:41.563 01:25:07 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.563 01:25:07 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:41.563 [2024-07-12 01:25:07.725957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:41.563 [2024-07-12 01:25:07.726038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753007 ] 00:06:41.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.563 [2024-07-12 01:25:07.784918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.563 [2024-07-12 01:25:07.814505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
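The fill run above appears to echo its parameters back through accel.sh's val readout: the -f 128 fill byte shows up as 0x80 and the -q 64/-a 64 pair as the two 64 values, alongside the 4096-byte transfer size and 1-second duration. The direct invocation captured in this log:

  $ accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y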
00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 01:25:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:42.947 01:25:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.947 00:06:42.947 real 0m1.225s 00:06:42.947 user 0m1.132s 00:06:42.947 sys 0m0.104s 00:06:42.947 01:25:08 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.947 01:25:08 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 ************************************ 00:06:42.947 END TEST accel_fill 00:06:42.947 ************************************ 00:06:42.947 01:25:08 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:42.947 01:25:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.947 01:25:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.947 01:25:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 ************************************ 00:06:42.947 START TEST accel_copy_crc32c 00:06:42.947 ************************************ 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:42.947 01:25:08 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
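Each TEST block in this stretch of the log has the same shape: run_test launches accel_test, which builds an accel JSON config (passed to the example app as -c /dev/fd/62), drives the accel_perf example binary with the workload flags shown, walks through the expected configuration values (the long runs of val= lines), and finally checks that the software module handled the requested opcode before the real/user/sys timings are printed. As a minimal sketch, the fill workload recorded above could be replayed by hand; the ./build/examples path comes from the trace, while running it standalone without a -c config (relying on the default software module) is an assumption, not part of the captured run:
  # hypothetical manual replay of the recorded fill workload (assumes a built SPDK tree)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y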
00:06:42.947 [2024-07-12 01:25:09.022999] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:42.947 [2024-07-12 01:25:09.023088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753281 ] 00:06:42.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.947 [2024-07-12 01:25:09.084260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.947 [2024-07-12 01:25:09.117666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.947 01:25:09 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 01:25:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.889 00:06:43.889 real 0m1.233s 00:06:43.889 user 0m1.135s 00:06:43.889 sys 0m0.111s 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.889 01:25:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:43.889 ************************************ 00:06:43.889 END TEST accel_copy_crc32c 00:06:43.889 ************************************ 00:06:44.150 01:25:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.150 01:25:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:44.150 01:25:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.150 01:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.150 ************************************ 00:06:44.150 START TEST accel_copy_crc32c_C2 00:06:44.150 ************************************ 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.150 [2024-07-12 01:25:10.318060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:44.150 [2024-07-12 01:25:10.318122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753430 ] 00:06:44.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.150 [2024-07-12 01:25:10.385022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.150 [2024-07-12 01:25:10.415461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.150 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:44.151 01:25:10 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.151 01:25:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.531 00:06:45.531 real 0m1.232s 00:06:45.531 user 0m1.133s 00:06:45.531 sys 0m0.112s 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.531 01:25:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:45.531 ************************************ 00:06:45.531 END TEST accel_copy_crc32c_C2 00:06:45.531 ************************************ 00:06:45.531 01:25:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:45.531 01:25:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:45.531 01:25:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.531 01:25:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.531 ************************************ 00:06:45.531 START TEST accel_dualcast 00:06:45.531 ************************************ 00:06:45.531 01:25:11 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:45.531 01:25:11 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:45.531 [2024-07-12 01:25:11.622549] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
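The two copy_crc32c blocks above differ only in the -C option: the plain run traces a single '4096 bytes' buffer, while the -C 2 run additionally shows an '8192 bytes' value, consistent with -C selecting the number of chained source buffers feeding one CRC. A sketch of the pair of invocations as recorded, under the same standalone-run assumption as the fill example earlier (omitting the -c config descriptor is not part of the captured run):
  # hypothetical replays of the two copy+crc32c workloads recorded above
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y        # single 4 KiB source
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # chained sources, per -C 2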
00:06:45.531 [2024-07-12 01:25:11.622623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753746 ] 00:06:45.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.532 [2024-07-12 01:25:11.681482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.532 [2024-07-12 01:25:11.710823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 
01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.532 01:25:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.472 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.473 01:25:12 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:46.473 01:25:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.473 00:06:46.473 real 0m1.224s 00:06:46.473 user 0m1.139s 00:06:46.473 sys 0m0.096s 00:06:46.473 01:25:12 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.473 01:25:12 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:46.473 ************************************ 00:06:46.473 END TEST accel_dualcast 00:06:46.473 ************************************ 00:06:46.733 01:25:12 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:46.733 01:25:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:46.733 01:25:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.733 01:25:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.733 ************************************ 00:06:46.733 START TEST accel_compare 00:06:46.733 ************************************ 00:06:46.733 01:25:12 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:46.733 01:25:12 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:46.733 [2024-07-12 01:25:12.916670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:46.733 [2024-07-12 01:25:12.916731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754095 ] 00:06:46.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.733 [2024-07-12 01:25:12.973299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.733 [2024-07-12 01:25:13.002081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.733 01:25:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:48.114 01:25:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.114 00:06:48.114 real 0m1.221s 00:06:48.114 user 0m1.132s 00:06:48.114 sys 0m0.100s 00:06:48.114 01:25:14 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.114 01:25:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:48.114 ************************************ 00:06:48.114 END TEST accel_compare 00:06:48.114 ************************************ 00:06:48.114 01:25:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:48.114 01:25:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.114 01:25:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.114 01:25:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.114 ************************************ 00:06:48.114 START TEST accel_xor 00:06:48.114 ************************************ 00:06:48.114 01:25:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:48.114 [2024-07-12 01:25:14.209200] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
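The three [[ ... ]] expansions that close every TEST block above are the pass condition: a module name and an opcode must have been reported back, and the module must match the expected one (software in every run here). Roughly, using the accel_module/accel_opc variable names visible in the trace; this is a paraphrase of the harness logic, not a copy of accel.sh:
  # sketch of the per-test pass condition (expected module is "software" in these runs)
  [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]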
00:06:48.114 [2024-07-12 01:25:14.209275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754433 ] 00:06:48.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.114 [2024-07-12 01:25:14.267044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.114 [2024-07-12 01:25:14.297633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.114 01:25:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 
01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.078 01:25:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.078 00:06:49.078 real 0m1.224s 00:06:49.078 user 0m1.136s 00:06:49.078 sys 0m0.100s 00:06:49.078 01:25:15 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.078 01:25:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:49.078 ************************************ 00:06:49.078 END TEST accel_xor 00:06:49.078 ************************************ 00:06:49.338 01:25:15 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:49.338 01:25:15 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:49.338 01:25:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.338 01:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.338 ************************************ 00:06:49.338 START TEST accel_xor 00:06:49.338 ************************************ 00:06:49.338 01:25:15 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:49.338 [2024-07-12 01:25:15.487034] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
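For reference, the command echoed just above runs the SPDK accel_perf example with an XOR workload over three source buffers. Assuming the same build tree, a standalone reproduction (dropping the -c /dev/fd/62 argument, which appears to carry the JSON accel config the harness pipes in) would presumably be:
  # XOR across 3 source buffers (-x 3), verify results (-y), run for 1 second (-t 1); flags copied from the logged command
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3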
00:06:49.338 [2024-07-12 01:25:15.487070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754582 ] 00:06:49.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.338 [2024-07-12 01:25:15.533658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.338 [2024-07-12 01:25:15.561743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.338 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.339 01:25:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 
01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:50.720 01:25:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.720 00:06:50.720 real 0m1.192s 00:06:50.720 user 0m1.123s 00:06:50.720 sys 0m0.082s 00:06:50.720 01:25:16 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.720 01:25:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:50.720 ************************************ 00:06:50.720 END TEST accel_xor 00:06:50.720 ************************************ 00:06:50.720 01:25:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.720 01:25:16 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:50.720 01:25:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.720 01:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.720 ************************************ 00:06:50.720 START TEST accel_dif_verify 00:06:50.720 ************************************ 00:06:50.720 01:25:16 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.720 [2024-07-12 01:25:16.765133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
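For reference, the dif_verify case being set up here drives the same accel_perf binary with a DIF verification workload; based on the command echoed in the log, a standalone sketch (again minus the harness's /dev/fd/62 config pipe, which is an assumption about its role) would presumably be:
  # DIF verify workload, 1-second run; flags taken from the logged command
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify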
00:06:50.720 [2024-07-12 01:25:16.765194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754831 ] 00:06:50.720 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.720 [2024-07-12 01:25:16.821395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.720 [2024-07-12 01:25:16.849724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 
01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.720 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.721 01:25:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 
01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:51.660 01:25:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.660 00:06:51.660 real 0m1.219s 00:06:51.660 user 0m1.143s 00:06:51.660 sys 0m0.089s 00:06:51.660 01:25:17 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.660 01:25:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 ************************************ 00:06:51.660 END TEST accel_dif_verify 00:06:51.660 ************************************ 00:06:51.660 01:25:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.660 01:25:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:51.660 01:25:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.660 01:25:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.921 ************************************ 00:06:51.921 START TEST accel_dif_generate 00:06:51.921 ************************************ 00:06:51.921 01:25:18 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 
01:25:18 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:51.921 [2024-07-12 01:25:18.054770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:51.921 [2024-07-12 01:25:18.054835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755179 ] 00:06:51.921 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.921 [2024-07-12 01:25:18.115182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.921 [2024-07-12 01:25:18.148426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 01:25:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.305 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:53.306 01:25:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.306 00:06:53.306 real 0m1.231s 00:06:53.306 user 0m1.144s 00:06:53.306 sys 
0m0.100s 00:06:53.306 01:25:19 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.306 01:25:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:53.306 ************************************ 00:06:53.306 END TEST accel_dif_generate 00:06:53.306 ************************************ 00:06:53.306 01:25:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.306 01:25:19 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:53.306 01:25:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.306 01:25:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.306 ************************************ 00:06:53.306 START TEST accel_dif_generate_copy 00:06:53.306 ************************************ 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:53.306 [2024-07-12 01:25:19.361919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
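For reference, the dif_generate_copy case started above exercises DIF generation combined with a buffer copy; based on the command echoed in the log, a standalone sketch would presumably be:
  # DIF generate-and-copy workload, 1-second run; flags taken from the logged command
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy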
00:06:53.306 [2024-07-12 01:25:19.362005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755529 ] 00:06:53.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.306 [2024-07-12 01:25:19.424810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.306 [2024-07-12 01:25:19.456514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.306 01:25:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.245 00:06:54.245 real 0m1.235s 00:06:54.245 user 0m1.139s 00:06:54.245 sys 0m0.107s 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.245 01:25:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.245 ************************************ 00:06:54.245 END TEST accel_dif_generate_copy 00:06:54.245 ************************************ 00:06:54.506 01:25:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:54.506 01:25:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.506 01:25:20 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:54.506 01:25:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.506 01:25:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.506 ************************************ 00:06:54.506 START TEST accel_comp 00:06:54.506 ************************************ 00:06:54.506 01:25:20 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:54.506 [2024-07-12 01:25:20.666308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:54.506 [2024-07-12 01:25:20.666377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755745 ] 00:06:54.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.506 [2024-07-12 01:25:20.726836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.506 [2024-07-12 01:25:20.758054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.506 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.506 
01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.507 01:25:20 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.507 01:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.890 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:55.891 01:25:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.891 00:06:55.891 real 0m1.229s 00:06:55.891 user 0m1.142s 00:06:55.891 sys 0m0.100s 00:06:55.891 01:25:21 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.891 01:25:21 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:55.891 ************************************ 00:06:55.891 END TEST accel_comp 00:06:55.891 ************************************ 00:06:55.891 01:25:21 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.891 01:25:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:55.891 01:25:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.891 01:25:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.891 ************************************ 00:06:55.891 START TEST accel_decomp 00:06:55.891 ************************************ 00:06:55.891 01:25:21 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:55.891 01:25:21 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:55.891 [2024-07-12 01:25:21.965535] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:55.891 [2024-07-12 01:25:21.965606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755927 ] 00:06:55.891 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.891 [2024-07-12 01:25:22.023449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.891 [2024-07-12 01:25:22.051556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.891 01:25:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.832 01:25:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.832 00:06:56.832 real 0m1.222s 00:06:56.832 user 0m1.133s 00:06:56.833 sys 0m0.102s 00:06:56.833 01:25:23 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.833 01:25:23 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:56.833 ************************************ 00:06:56.833 END TEST accel_decomp 00:06:56.833 ************************************ 00:06:57.093 
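[annotation] The accel_decomp trace above drives the software accel module through `accel_perf` with 4096-byte buffers against the bundled `test/accel/bib` input. A minimal sketch of re-running that case by hand, using only flags that appear verbatim in the traced command line (the harness also passes a generated accel JSON config over `-c /dev/fd/62`, which a standalone run can usually omit or point at a saved config file), might look like this; the workspace path is the one printed in the log and should be adjusted to your own checkout:

```bash
#!/usr/bin/env bash
# Sketch of the software decompress case traced above (paths as printed in the log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -t 1          run the workload for 1 second, as the harness does
# -w decompress workload under test
# -l <file>     compressed input; the suite uses test/accel/bib
# -y            verify each decompressed buffer
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y
```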
01:25:23 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.093 01:25:23 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:57.093 01:25:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.093 01:25:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.093 ************************************ 00:06:57.093 START TEST accel_decmop_full 00:06:57.093 ************************************ 00:06:57.093 01:25:23 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:57.093 [2024-07-12 01:25:23.257995] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:57.093 [2024-07-12 01:25:23.258053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756264 ] 00:06:57.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.093 [2024-07-12 01:25:23.314377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.093 [2024-07-12 01:25:23.342779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.093 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.094 01:25:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.476 01:25:24 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.476 00:06:58.476 real 0m1.231s 00:06:58.476 user 0m1.154s 00:06:58.476 sys 0m0.090s 00:06:58.476 01:25:24 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.476 01:25:24 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:58.476 ************************************ 00:06:58.476 END TEST accel_decmop_full 00:06:58.476 ************************************ 00:06:58.476 01:25:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.476 01:25:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:58.476 01:25:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.476 01:25:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.476 ************************************ 00:06:58.476 START TEST accel_decomp_mcore 00:06:58.476 ************************************ 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:58.476 [2024-07-12 01:25:24.553688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:58.476 [2024-07-12 01:25:24.553759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756611 ] 00:06:58.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.476 [2024-07-12 01:25:24.612206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.476 [2024-07-12 01:25:24.643530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.476 [2024-07-12 01:25:24.643650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.476 [2024-07-12 01:25:24.643806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.476 [2024-07-12 01:25:24.643807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.476 01:25:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
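[annotation] The accel_decomp_mcore case traced here adds `-m 0xf`, a hex core mask with bits 0-3 set, so the EAL reports "Total cores available: 4" and starts reactors on cores 0 through 3 (visible in the notices above); the same decompress workload then runs on all four cores at once, which is why the summary that follows reports roughly four CPU-seconds of user time for a roughly one-second run. A hedged sketch of the equivalent manual invocation, under the same workspace-path assumption as above:

```bash
#!/usr/bin/env bash
# Sketch of the multi-core decompress case traced above.
# -m 0xf is a hexadecimal core mask: bits 0-3 set, i.e. reactors on cores 0,1,2,3.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
```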
00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.416 00:06:59.416 real 0m1.234s 00:06:59.416 user 0m4.357s 00:06:59.416 sys 0m0.110s 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.416 01:25:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:59.416 ************************************ 00:06:59.416 END TEST accel_decomp_mcore 00:06:59.416 ************************************ 00:06:59.678 01:25:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.678 01:25:25 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:59.678 01:25:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.678 01:25:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.678 ************************************ 00:06:59.678 START TEST accel_decomp_full_mcore 00:06:59.678 ************************************ 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:59.678 01:25:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:59.678 [2024-07-12 01:25:25.863609] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:59.678 [2024-07-12 01:25:25.863687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756923 ] 00:06:59.678 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.678 [2024-07-12 01:25:25.933065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.678 [2024-07-12 01:25:25.970963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.678 [2024-07-12 01:25:25.971088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.678 [2024-07-12 01:25:25.971270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.678 [2024-07-12 01:25:25.971288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.678 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.679 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.679 01:25:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.062 00:07:01.062 real 0m1.267s 00:07:01.062 user 0m4.416s 00:07:01.062 sys 0m0.122s 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.062 01:25:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:01.062 ************************************ 00:07:01.062 END TEST accel_decomp_full_mcore 00:07:01.062 ************************************ 00:07:01.062 01:25:27 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.062 01:25:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:01.062 01:25:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.062 01:25:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.062 ************************************ 00:07:01.062 START TEST accel_decomp_mthread 00:07:01.062 ************************************ 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.062 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
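[annotation] The mthread variants started here add `-T 2` to the same workload (the extra worker-thread count implied by the test name, echoed as `val=2` in the trace), and the *_full_* variants seen earlier add `-o 0`, under which the traced configuration switches from '4096 bytes' to a single '111250 bytes' buffer. A short sketch combining these options exactly as the run_test lines in this section do; flag semantics beyond what the trace shows are left to accel_perf's own usage text:

```bash
#!/usr/bin/env bash
# Sketch of the multithreaded and full-buffer decompress variants traced in this section.
# Flags are copied verbatim from the run_test lines above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BIB="$SPDK/test/accel/bib"

"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -T 2        # accel_decomp_mthread
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2   # accel_decomp_full_mthread
```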
00:07:01.063 [2024-07-12 01:25:27.204581] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:01.063 [2024-07-12 01:25:27.204645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757101 ] 00:07:01.063 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.063 [2024-07-12 01:25:27.263709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.063 [2024-07-12 01:25:27.297075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.063 01:25:27 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.448 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.449 00:07:02.449 real 0m1.233s 00:07:02.449 user 0m1.146s 00:07:02.449 sys 0m0.101s 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.449 01:25:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:02.449 ************************************ 00:07:02.449 END TEST accel_decomp_mthread 00:07:02.449 ************************************ 00:07:02.449 01:25:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.449 01:25:28 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:02.449 01:25:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.449 01:25:28 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.449 ************************************ 00:07:02.449 START TEST accel_decomp_full_mthread 00:07:02.449 ************************************ 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:02.449 [2024-07-12 01:25:28.513907] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:02.449 [2024-07-12 01:25:28.514011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757356 ] 00:07:02.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.449 [2024-07-12 01:25:28.579513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.449 [2024-07-12 01:25:28.607913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.449 01:25:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.391 00:07:03.391 real 0m1.253s 00:07:03.391 user 0m1.159s 00:07:03.391 sys 0m0.108s 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.391 01:25:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:03.391 ************************************ 00:07:03.391 END TEST accel_decomp_full_mthread 00:07:03.391 
************************************ 00:07:03.652 01:25:29 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:03.652 01:25:29 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.652 01:25:29 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:03.652 01:25:29 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:03.652 01:25:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.652 01:25:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.652 01:25:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.652 01:25:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.652 01:25:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.652 01:25:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.652 01:25:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.652 01:25:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:03.652 01:25:29 accel -- accel/accel.sh@41 -- # jq -r . 00:07:03.652 ************************************ 00:07:03.652 START TEST accel_dif_functional_tests 00:07:03.652 ************************************ 00:07:03.652 01:25:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.652 [2024-07-12 01:25:29.859053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.652 [2024-07-12 01:25:29.859097] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757712 ] 00:07:03.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.652 [2024-07-12 01:25:29.914124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.652 [2024-07-12 01:25:29.943612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.652 [2024-07-12 01:25:29.943731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.652 [2024-07-12 01:25:29.943732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.652 00:07:03.652 00:07:03.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.652 http://cunit.sourceforge.net/ 00:07:03.652 00:07:03.652 00:07:03.652 Suite: accel_dif 00:07:03.652 Test: verify: DIF generated, GUARD check ...passed 00:07:03.652 Test: verify: DIF generated, APPTAG check ...passed 00:07:03.652 Test: verify: DIF generated, REFTAG check ...passed 00:07:03.652 Test: verify: DIF not generated, GUARD check ...[2024-07-12 01:25:29.989292] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.652 passed 00:07:03.652 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 01:25:29.989334] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.652 passed 00:07:03.653 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 01:25:29.989352] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.653 passed 00:07:03.653 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:03.653 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 01:25:29.989393] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:03.653 passed 00:07:03.653 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:03.653 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:03.653 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:03.653 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 01:25:29.989485] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:03.653 passed 00:07:03.653 Test: verify copy: DIF generated, GUARD check ...passed 00:07:03.653 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:03.653 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:03.653 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 01:25:29.989590] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.653 passed 00:07:03.653 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 01:25:29.989612] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.653 passed 00:07:03.653 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 01:25:29.989633] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.653 passed 00:07:03.653 Test: generate copy: DIF generated, GUARD check ...passed 00:07:03.653 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:03.653 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:03.653 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:03.653 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:03.653 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:03.653 Test: generate copy: iovecs-len validate ...[2024-07-12 01:25:29.989799] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:03.653 passed 00:07:03.653 Test: generate copy: buffer alignment validate ...passed 00:07:03.653 00:07:03.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.653 suites 1 1 n/a 0 0 00:07:03.653 tests 26 26 26 0 0 00:07:03.653 asserts 115 115 115 0 n/a 00:07:03.653 00:07:03.653 Elapsed time = 0.002 seconds 00:07:03.913 00:07:03.913 real 0m0.270s 00:07:03.913 user 0m0.379s 00:07:03.913 sys 0m0.126s 00:07:03.913 01:25:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.913 01:25:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:03.913 ************************************ 00:07:03.913 END TEST accel_dif_functional_tests 00:07:03.913 ************************************ 00:07:03.913 00:07:03.913 real 0m28.481s 00:07:03.913 user 0m32.219s 00:07:03.913 sys 0m3.963s 00:07:03.913 01:25:30 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.913 01:25:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.913 ************************************ 00:07:03.913 END TEST accel 00:07:03.913 ************************************ 00:07:03.913 01:25:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:03.913 01:25:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.913 01:25:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.913 01:25:30 -- common/autotest_common.sh@10 -- # set +x 00:07:03.913 ************************************ 00:07:03.913 START TEST accel_rpc 00:07:03.913 ************************************ 00:07:03.913 01:25:30 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:04.174 * Looking for test storage... 00:07:04.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3757780 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3757780 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3757780 ']' 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.174 [2024-07-12 01:25:30.351032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:04.174 [2024-07-12 01:25:30.351083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757780 ] 00:07:04.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.174 [2024-07-12 01:25:30.407200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.174 [2024-07-12 01:25:30.435794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:04.174 01:25:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.174 01:25:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.174 ************************************ 00:07:04.174 START TEST accel_assign_opcode 00:07:04.174 ************************************ 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.174 [2024-07-12 01:25:30.516243] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.174 [2024-07-12 01:25:30.524252] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.174 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.435 01:25:30 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.435 software 00:07:04.435 00:07:04.435 real 0m0.184s 00:07:04.435 user 0m0.046s 00:07:04.435 sys 0m0.012s 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.435 01:25:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.435 ************************************ 00:07:04.435 END TEST accel_assign_opcode 00:07:04.435 ************************************ 00:07:04.435 01:25:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3757780 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3757780 ']' 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3757780 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3757780 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3757780' 00:07:04.435 killing process with pid 3757780 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@965 -- # kill 3757780 00:07:04.435 01:25:30 accel_rpc -- common/autotest_common.sh@970 -- # wait 3757780 00:07:04.696 00:07:04.696 real 0m0.762s 00:07:04.696 user 0m0.743s 00:07:04.696 sys 0m0.369s 00:07:04.696 01:25:30 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.696 01:25:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.696 ************************************ 00:07:04.696 END TEST accel_rpc 00:07:04.696 ************************************ 00:07:04.696 01:25:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.696 01:25:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.696 01:25:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.696 01:25:31 -- common/autotest_common.sh@10 -- # set +x 00:07:04.696 ************************************ 00:07:04.696 START TEST app_cmdline 00:07:04.696 ************************************ 00:07:04.696 01:25:31 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.957 * Looking for test storage... 
00:07:04.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.957 01:25:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.957 01:25:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3758174 00:07:04.957 01:25:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3758174 00:07:04.957 01:25:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3758174 ']' 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.957 01:25:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.957 [2024-07-12 01:25:31.192219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:04.957 [2024-07-12 01:25:31.192288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758174 ] 00:07:04.957 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.957 [2024-07-12 01:25:31.249887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.957 [2024-07-12 01:25:31.278386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.907 01:25:31 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.907 01:25:31 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:05.907 01:25:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:05.907 { 00:07:05.907 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:05.907 "fields": { 00:07:05.907 "major": 24, 00:07:05.907 "minor": 5, 00:07:05.907 "patch": 1, 00:07:05.907 "suffix": "-pre", 00:07:05.907 "commit": "5fa2f5086" 00:07:05.907 } 00:07:05.907 } 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.907 01:25:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:05.907 01:25:32 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.168 request: 00:07:06.168 { 00:07:06.168 "method": "env_dpdk_get_mem_stats", 00:07:06.168 "req_id": 1 00:07:06.168 } 00:07:06.168 Got JSON-RPC error response 00:07:06.168 response: 00:07:06.168 { 00:07:06.168 "code": -32601, 00:07:06.168 "message": "Method not found" 00:07:06.168 } 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.168 01:25:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3758174 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3758174 ']' 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3758174 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3758174 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3758174' 00:07:06.168 killing process with pid 3758174 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@965 -- # kill 3758174 00:07:06.168 01:25:32 app_cmdline -- common/autotest_common.sh@970 -- # wait 3758174 00:07:06.429 00:07:06.429 real 0m1.538s 00:07:06.429 user 0m1.873s 00:07:06.429 sys 0m0.395s 00:07:06.429 01:25:32 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.429 01:25:32 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 ************************************ 00:07:06.429 END TEST app_cmdline 00:07:06.429 ************************************ 00:07:06.429 01:25:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.429 01:25:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.429 01:25:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.429 01:25:32 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 ************************************ 00:07:06.429 START TEST version 00:07:06.429 ************************************ 00:07:06.429 01:25:32 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.429 * Looking for test storage... 00:07:06.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.429 01:25:32 version -- app/version.sh@17 -- # get_header_version major 00:07:06.429 01:25:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # cut -f2 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.429 01:25:32 version -- app/version.sh@17 -- # major=24 00:07:06.429 01:25:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.429 01:25:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # cut -f2 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.429 01:25:32 version -- app/version.sh@18 -- # minor=5 00:07:06.429 01:25:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:06.429 01:25:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # cut -f2 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.429 01:25:32 version -- app/version.sh@19 -- # patch=1 00:07:06.429 01:25:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.429 01:25:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # cut -f2 00:07:06.429 01:25:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.689 01:25:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:06.689 01:25:32 version -- app/version.sh@22 -- # version=24.5 00:07:06.689 01:25:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:06.689 01:25:32 version -- app/version.sh@25 -- # version=24.5.1 00:07:06.689 01:25:32 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:06.689 01:25:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.689 01:25:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
00:07:06.689 01:25:32 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:06.689 01:25:32 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:06.689 00:07:06.689 real 0m0.175s 00:07:06.689 user 0m0.084s 00:07:06.689 sys 0m0.125s 00:07:06.689 01:25:32 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.689 01:25:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 ************************************ 00:07:06.689 END TEST version 00:07:06.690 ************************************ 00:07:06.690 01:25:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@198 -- # uname -s 00:07:06.690 01:25:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:06.690 01:25:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:06.690 01:25:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:06.690 01:25:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:06.690 01:25:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.690 01:25:32 -- common/autotest_common.sh@10 -- # set +x 00:07:06.690 01:25:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:06.690 01:25:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:06.690 01:25:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.690 01:25:32 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:06.690 01:25:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.690 01:25:32 -- common/autotest_common.sh@10 -- # set +x 00:07:06.690 ************************************ 00:07:06.690 START TEST nvmf_tcp 00:07:06.690 ************************************ 00:07:06.690 01:25:32 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.690 * Looking for test storage... 00:07:06.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.950 01:25:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.950 01:25:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.950 01:25:33 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.950 01:25:33 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:06.950 01:25:33 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:06.950 01:25:33 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:06.950 01:25:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:06.950 01:25:33 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:06.950 01:25:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:06.950 01:25:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.950 01:25:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.950 ************************************ 00:07:06.950 START TEST nvmf_example 00:07:06.950 ************************************ 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:06.950 * Looking for test storage... 
00:07:06.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.950 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.951 01:25:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:15.203 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:15.203 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:15.203 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:15.203 Found net devices under 0000:31:00.1: cvl_0_1 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:07:15.203 00:07:15.203 --- 10.0.0.2 ping statistics --- 00:07:15.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.203 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:07:15.203 00:07:15.203 --- 10.0.0.1 ping statistics --- 00:07:15.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.203 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.203 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3762933 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3762933 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3762933 ']' 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
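The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short sequence. The sketch below only restates the commands already visible in this trace; the interface names, addresses and port (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, 4420) are the values from this particular run, not a fixed convention:

  # Target port moves into its own network namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check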
00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.204 01:25:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.144 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:16.145 01:25:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:16.145 EAL: No free 2048 kB hugepages reported on node 1 
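The rpc_cmd calls traced just above are the entire target-side configuration for this example run (the nvmf example app itself is running inside the cvl_0_0_ns_spdk namespace). Condensed into one place, with all NQNs, sizes and addresses taken literally from this log, and with flag notes reflecting common spdk_nvme_perf / bdev_malloc_create usage rather than anything stated in the trace:

  # Target side: transport, backing bdev, subsystem, namespace, listener.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512                  # 64 MB RAM-backed bdev, 512-byte blocks -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: 10-second, 4 KiB, queue-depth-64 mixed random workload over NVMe/TCP.
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf output that follows reports roughly 18.9k IOPS at about 3.4 ms average latency for that workload.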
00:07:28.371 Initializing NVMe Controllers 00:07:28.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.371 Initialization complete. Launching workers. 00:07:28.371 ======================================================== 00:07:28.371 Latency(us) 00:07:28.371 Device Information : IOPS MiB/s Average min max 00:07:28.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18920.71 73.91 3382.68 665.23 41259.41 00:07:28.371 ======================================================== 00:07:28.371 Total : 18920.71 73.91 3382.68 665.23 41259.41 00:07:28.371 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.371 rmmod nvme_tcp 00:07:28.371 rmmod nvme_fabrics 00:07:28.371 rmmod nvme_keyring 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3762933 ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3762933 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3762933 ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3762933 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3762933 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3762933' 00:07:28.371 killing process with pid 3762933 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3762933 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3762933 00:07:28.371 nvmf threads initialize successfully 00:07:28.371 bdev subsystem init successfully 00:07:28.371 created a nvmf target service 00:07:28.371 create targets's poll groups done 00:07:28.371 all subsystems of target started 00:07:28.371 nvmf target is running 00:07:28.371 all subsystems of target stopped 00:07:28.371 destroy targets's poll groups done 00:07:28.371 destroyed the nvmf target service 00:07:28.371 bdev subsystem finish successfully 00:07:28.371 nvmf threads destroy successfully 00:07:28.371 01:25:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.371 01:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 00:07:28.632 real 0m21.800s 00:07:28.632 user 0m46.685s 00:07:28.632 sys 0m7.129s 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.632 01:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 ************************************ 00:07:28.632 END TEST nvmf_example 00:07:28.632 ************************************ 00:07:28.632 01:25:54 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:28.632 01:25:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.632 01:25:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.632 01:25:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 ************************************ 00:07:28.632 START TEST nvmf_filesystem 00:07:28.632 ************************************ 00:07:28.632 01:25:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:28.895 * Looking for test storage... 
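For reference, the nvmf_example teardown traced above (nvmftestfini, before the filesystem test begins) mirrors the setup: unload the kernel initiator modules, stop the example target, and undo the per-test network configuration. Restated from the trace, with the interface name and PID being the values from this run:

  # nvmftestfini, as traced above
  modprobe -v -r nvme-tcp              # rmmod output: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3762933                         # the nvmf example target started for this test
  _remove_spdk_ns                      # removes spdk-created network namespaces (output suppressed in the trace)
  ip -4 addr flush cvl_0_1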
00:07:28.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:28.896 01:25:55 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:28.896 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.896 01:25:55 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:28.897 #define SPDK_CONFIG_H 00:07:28.897 #define SPDK_CONFIG_APPS 1 00:07:28.897 #define SPDK_CONFIG_ARCH native 00:07:28.897 #undef SPDK_CONFIG_ASAN 00:07:28.897 #undef SPDK_CONFIG_AVAHI 00:07:28.897 #undef SPDK_CONFIG_CET 00:07:28.897 #define SPDK_CONFIG_COVERAGE 1 00:07:28.897 #define SPDK_CONFIG_CROSS_PREFIX 00:07:28.897 #undef SPDK_CONFIG_CRYPTO 00:07:28.897 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:28.897 #undef SPDK_CONFIG_CUSTOMOCF 00:07:28.897 #undef SPDK_CONFIG_DAOS 00:07:28.897 #define SPDK_CONFIG_DAOS_DIR 00:07:28.897 #define SPDK_CONFIG_DEBUG 1 00:07:28.897 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:28.897 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:28.897 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:28.897 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:28.897 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:28.897 #undef SPDK_CONFIG_DPDK_UADK 00:07:28.897 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:28.897 #define SPDK_CONFIG_EXAMPLES 1 00:07:28.897 #undef SPDK_CONFIG_FC 00:07:28.897 #define SPDK_CONFIG_FC_PATH 00:07:28.897 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:28.897 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:28.897 #undef SPDK_CONFIG_FUSE 00:07:28.897 #undef SPDK_CONFIG_FUZZER 00:07:28.897 #define SPDK_CONFIG_FUZZER_LIB 00:07:28.897 #undef SPDK_CONFIG_GOLANG 00:07:28.897 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:28.897 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:28.897 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:28.897 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:28.897 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:28.897 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:28.897 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:28.897 #define SPDK_CONFIG_IDXD 1 00:07:28.897 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:28.897 #undef SPDK_CONFIG_IPSEC_MB 00:07:28.897 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:28.897 #define SPDK_CONFIG_ISAL 1 00:07:28.897 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:28.897 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:28.897 #define SPDK_CONFIG_LIBDIR 00:07:28.897 #undef SPDK_CONFIG_LTO 00:07:28.897 #define SPDK_CONFIG_MAX_LCORES 
00:07:28.897 #define SPDK_CONFIG_NVME_CUSE 1 00:07:28.897 #undef SPDK_CONFIG_OCF 00:07:28.897 #define SPDK_CONFIG_OCF_PATH 00:07:28.897 #define SPDK_CONFIG_OPENSSL_PATH 00:07:28.897 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:28.897 #define SPDK_CONFIG_PGO_DIR 00:07:28.897 #undef SPDK_CONFIG_PGO_USE 00:07:28.897 #define SPDK_CONFIG_PREFIX /usr/local 00:07:28.897 #undef SPDK_CONFIG_RAID5F 00:07:28.897 #undef SPDK_CONFIG_RBD 00:07:28.897 #define SPDK_CONFIG_RDMA 1 00:07:28.897 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:28.897 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:28.897 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:28.897 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:28.897 #define SPDK_CONFIG_SHARED 1 00:07:28.897 #undef SPDK_CONFIG_SMA 00:07:28.897 #define SPDK_CONFIG_TESTS 1 00:07:28.897 #undef SPDK_CONFIG_TSAN 00:07:28.897 #define SPDK_CONFIG_UBLK 1 00:07:28.897 #define SPDK_CONFIG_UBSAN 1 00:07:28.897 #undef SPDK_CONFIG_UNIT_TESTS 00:07:28.897 #undef SPDK_CONFIG_URING 00:07:28.897 #define SPDK_CONFIG_URING_PATH 00:07:28.897 #undef SPDK_CONFIG_URING_ZNS 00:07:28.897 #undef SPDK_CONFIG_USDT 00:07:28.897 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:28.897 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:28.897 #define SPDK_CONFIG_VFIO_USER 1 00:07:28.897 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:28.897 #define SPDK_CONFIG_VHOST 1 00:07:28.897 #define SPDK_CONFIG_VIRTIO 1 00:07:28.897 #undef SPDK_CONFIG_VTUNE 00:07:28.897 #define SPDK_CONFIG_VTUNE_DIR 00:07:28.897 #define SPDK_CONFIG_WERROR 1 00:07:28.897 #define SPDK_CONFIG_WPDK_DIR 00:07:28.897 #undef SPDK_CONFIG_XNVME 00:07:28.897 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:28.897 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:28.898 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3765754 ]] 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3765754 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:28.899 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.caLYWH 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.caLYWH/tests/target /tmp/spdk.caLYWH 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=956157952 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4328271872 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=121290272768 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129370980352 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8080707584 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64680779776 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685490176 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864253440 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874198528 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9945088 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:07:28.900 01:25:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=179200 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=324608 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64684101632 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685490176 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1388544 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937093120 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937097216 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:28.900 * Looking for test storage... 
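
The trace above is the test-storage search from autotest_common.sh: it walks `df -T`, records the available bytes behind every mount point, and in the lines that follow picks the first candidate directory whose filesystem still has the requested 2 GiB free. A reduced sketch of that selection logic is below; it is not the verbatim helper, and the candidate paths and the 1K-block-to-byte conversion are only what can be read off the values in the trace.

# Sketch of the storage search traced here (simplified, not the real helper):
# record available bytes per mount point, then take the first candidate
# directory whose mount still has >= requested_size bytes free.
requested_size=2147483648                      # 2 GiB, as requested in the trace
testdir=${testdir:-$PWD}                       # CI uses .../spdk/test/nvmf/target
storage_fallback=$(mktemp -udt spdk.XXXXXX)
declare -A avails
while read -r source fs size use avail _ mount; do
    avails["$mount"]=$((avail * 1024))         # df -T reports 1K blocks
done < <(df -T | grep -v Filesystem)
for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    mkdir -p "$target_dir"
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount_point]:-0} >= requested_size )); then
        echo "* Found test storage at $target_dir"
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done
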
00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=121290272768 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10295300096 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.900 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.901 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.901 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.901 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
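
The block above is nvmf/common.sh establishing the initiator-side identity and the test defaults: listening port 4420, the serial SPDKISFASTANDAWESOME, and a host NQN/host ID freshly generated with `nvme gen-hostnqn`. Those values are consumed much later in this log when the initiator attaches, roughly as in the sketch below; the target address 10.0.0.2 and subsystem NQN cnode1 are the ones this test creates further down, and the host-ID derivation is shown only for illustration.

# Sketch: how the identity exported by nvmf/common.sh above is used when the
# initiator attaches to the target later in this test (illustrative only).
NVMF_PORT=4420
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}            # bare UUID portion of the host NQN
NVMF_FIRST_TARGET_IP=10.0.0.2                  # target-side address assigned below

nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

# The test then polls `lsblk -l -o NAME,SERIAL` until a namespace with serial
# $NVMF_SERIAL appears before touching the new block device.
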
00:07:29.162 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.163 01:25:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
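
The arrays above whitelist the supported NICs by PCI vendor:device ID (Intel E810/X722 plus several Mellanox parts); the loop that follows matches each discovered device against them and collects the kernel interface names from sysfs. A reduced stand-in for that discovery is sketched below for the E810 case seen in this run (0x8086:0x159b); it reads the same sysfs layout but is not the gather_supported_nvmf_pci_devs helper itself.

# Sketch: find Intel E810 (0x8086:0x159b) ports and the kernel net devices
# behind them by reading sysfs, a stand-in for the discovery traced below.
intel=0x8086
e810=0x159b
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" ]] || continue
    [[ $(cat "$pci/device") == "$e810" ]]  || continue
    echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
    for net_dev in "$pci"/net/*; do            # interface name(s) exposed by this port
        [[ -e $net_dev ]] || continue
        net_devs+=("${net_dev##*/}")
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done
printf 'usable interfaces: %s\n' "${net_devs[*]}"

In this run the two matching ports resolve to cvl_0_0 and cvl_0_1, which the script then splits into a target interface and an initiator interface, as the trace below shows.
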
00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:37.332 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:37.332 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:37.332 Found net devices under 0000:31:00.0: cvl_0_0 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:37.332 Found net devices under 0000:31:00.1: cvl_0_1 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.332 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:07:37.333 00:07:37.333 --- 10.0.0.2 ping statistics --- 00:07:37.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.333 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:07:37.333 00:07:37.333 --- 10.0.0.1 ping statistics --- 00:07:37.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.333 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 ************************************ 00:07:37.333 START TEST nvmf_filesystem_no_in_capsule 00:07:37.333 ************************************ 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3770052 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3770052 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 
3770052 ']' 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.333 01:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 [2024-07-12 01:26:03.629958] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:37.333 [2024-07-12 01:26:03.630012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.333 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.593 [2024-07-12 01:26:03.703050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.593 [2024-07-12 01:26:03.739637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.593 [2024-07-12 01:26:03.739673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.593 [2024-07-12 01:26:03.739681] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.593 [2024-07-12 01:26:03.739687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.593 [2024-07-12 01:26:03.739693] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
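
At this point the two E810 ports have been split between a new network namespace (cvl_0_0_ns_spdk, holding cvl_0_0 at 10.0.0.2) and the host side (cvl_0_1 at 10.0.0.1), and nvmf_tgt has been launched inside that namespace as PID 3770052. That launch-and-wait step reduces to roughly the sketch below; the polling loop is a simplified stand-in for the real waitforlisten helper.

# Sketch: launch the NVMe-oF target inside the test namespace and wait for its
# JSON-RPC socket, mirroring the nvmf_tgt / waitforlisten step traced above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

ip netns exec "$NVMF_TARGET_NAMESPACE" \
    "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Only the network namespace differs, so the UNIX socket under /var/tmp is
# visible from the host side; poll until it shows up (or the app dies).
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S $DEFAULT_RPC_ADDR ]] && break
    sleep 0.1
done

Once the socket answers, the subsystem is assembled over RPC exactly as in the trace that follows: a TCP transport with in-capsule data size 0, a 512 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a TCP listener on 10.0.0.2:4420.
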
00:07:37.593 [2024-07-12 01:26:03.739833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.593 [2024-07-12 01:26:03.739957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.593 [2024-07-12 01:26:03.740112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.593 [2024-07-12 01:26:03.740113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 [2024-07-12 01:26:04.443811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.164 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 Malloc1 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 [2024-07-12 01:26:04.574680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:38.425 { 00:07:38.425 "name": "Malloc1", 00:07:38.425 "aliases": [ 00:07:38.425 "7934b673-b1ab-425b-b761-4fd9f887469d" 00:07:38.425 ], 00:07:38.425 "product_name": "Malloc disk", 00:07:38.425 "block_size": 512, 00:07:38.425 "num_blocks": 1048576, 00:07:38.425 "uuid": "7934b673-b1ab-425b-b761-4fd9f887469d", 00:07:38.425 "assigned_rate_limits": { 00:07:38.425 "rw_ios_per_sec": 0, 00:07:38.425 "rw_mbytes_per_sec": 0, 00:07:38.425 "r_mbytes_per_sec": 0, 00:07:38.425 "w_mbytes_per_sec": 0 00:07:38.425 }, 00:07:38.425 "claimed": true, 00:07:38.425 "claim_type": "exclusive_write", 00:07:38.425 "zoned": false, 00:07:38.425 "supported_io_types": { 00:07:38.425 "read": true, 00:07:38.425 "write": true, 00:07:38.425 "unmap": true, 00:07:38.425 "write_zeroes": true, 00:07:38.425 "flush": true, 00:07:38.425 "reset": true, 00:07:38.425 "compare": false, 00:07:38.425 "compare_and_write": false, 00:07:38.425 "abort": true, 00:07:38.425 "nvme_admin": false, 00:07:38.425 "nvme_io": false 00:07:38.425 }, 00:07:38.425 "memory_domains": [ 00:07:38.425 { 00:07:38.425 "dma_device_id": "system", 00:07:38.425 "dma_device_type": 1 00:07:38.425 }, 00:07:38.425 { 00:07:38.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.425 "dma_device_type": 2 00:07:38.425 } 00:07:38.425 ], 00:07:38.425 "driver_specific": {} 00:07:38.425 } 00:07:38.425 ]' 00:07:38.425 
01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:38.425 01:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.810 01:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.810 01:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:39.810 01:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.810 01:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:39.810 01:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:42.354 01:26:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:42.354 01:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.296 ************************************ 00:07:43.296 START TEST filesystem_ext4 00:07:43.296 ************************************ 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:43.296 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:43.296 mke2fs 1.46.5 (30-Dec-2021) 00:07:43.296 Discarding device blocks: 0/522240 done 00:07:43.296 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:43.296 
Filesystem UUID: a3350169-29bd-4634-b072-33a105e094e4 00:07:43.296 Superblock backups stored on blocks: 00:07:43.296 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:43.296 00:07:43.296 Allocating group tables: 0/64 done 00:07:43.296 Writing inode tables: 0/64 done 00:07:43.557 Creating journal (8192 blocks): done 00:07:43.557 Writing superblocks and filesystem accounting information: 0/64 done 00:07:43.557 00:07:43.557 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:43.557 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.557 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3770052 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.817 01:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.817 00:07:43.817 real 0m0.471s 00:07:43.817 user 0m0.025s 00:07:43.817 sys 0m0.053s 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:43.817 ************************************ 00:07:43.817 END TEST filesystem_ext4 00:07:43.817 ************************************ 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.817 ************************************ 00:07:43.817 START TEST filesystem_btrfs 00:07:43.817 ************************************ 00:07:43.817 01:26:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:43.817 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:43.818 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:43.818 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:43.818 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:44.078 btrfs-progs v6.6.2 00:07:44.078 See https://btrfs.readthedocs.io for more information. 00:07:44.078 00:07:44.078 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:44.078 NOTE: several default settings have changed in version 5.15, please make sure 00:07:44.078 this does not affect your deployments: 00:07:44.078 - DUP for metadata (-m dup) 00:07:44.078 - enabled no-holes (-O no-holes) 00:07:44.078 - enabled free-space-tree (-R free-space-tree) 00:07:44.078 00:07:44.078 Label: (null) 00:07:44.078 UUID: 67068e50-6028-4803-929d-3642a17b44e9 00:07:44.078 Node size: 16384 00:07:44.078 Sector size: 4096 00:07:44.078 Filesystem size: 510.00MiB 00:07:44.078 Block group profiles: 00:07:44.078 Data: single 8.00MiB 00:07:44.078 Metadata: DUP 32.00MiB 00:07:44.078 System: DUP 8.00MiB 00:07:44.078 SSD detected: yes 00:07:44.078 Zoned device: no 00:07:44.078 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:44.078 Runtime features: free-space-tree 00:07:44.078 Checksum: crc32c 00:07:44.078 Number of devices: 1 00:07:44.078 Devices: 00:07:44.078 ID SIZE PATH 00:07:44.078 1 510.00MiB /dev/nvme0n1p1 00:07:44.078 00:07:44.078 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:44.078 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3770052 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.339 00:07:44.339 real 0m0.542s 00:07:44.339 user 0m0.017s 00:07:44.339 sys 0m0.071s 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.339 ************************************ 00:07:44.339 END TEST filesystem_btrfs 00:07:44.339 ************************************ 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.339 01:26:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.339 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.600 ************************************ 00:07:44.600 START TEST filesystem_xfs 00:07:44.600 ************************************ 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:44.600 01:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.600 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.600 = sectsz=512 attr=2, projid32bit=1 00:07:44.600 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.600 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.600 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.600 = sunit=0 swidth=0 blks 00:07:44.600 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.600 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.600 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.600 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.541 Discarding blocks...Done. 
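
Condensed for reference: the filesystem_xfs leg traced above boils down to the following sketch once the xtrace prefixes are stripped. Every command is taken from the traced make_filesystem()/filesystem.sh steps; /dev/nvme0n1p1 is the GPT partition created by the earlier parted/partprobe calls, and 3770052 is the nvmf_tgt pid of this run.

    # ext4 takes -F to force, btrfs and xfs take -f (see the force= branches in the trace)
    mkfs.xfs -f /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync        # exercise a write through the NVMe-oF block device
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 3770052                      # target process must still be alive after the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # namespace and partition still visible after unmount
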
00:07:45.541 01:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:45.541 01:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3770052 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.083 00:07:48.083 real 0m3.494s 00:07:48.083 user 0m0.031s 00:07:48.083 sys 0m0.051s 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.083 ************************************ 00:07:48.083 END TEST filesystem_xfs 00:07:48.083 ************************************ 00:07:48.083 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:48.343 
01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3770052 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3770052 ']' 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3770052 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.343 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3770052 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3770052' 00:07:48.604 killing process with pid 3770052 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3770052 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3770052 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.604 00:07:48.604 real 0m11.350s 00:07:48.604 user 0m44.796s 00:07:48.604 sys 0m1.034s 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.604 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.604 ************************************ 00:07:48.604 END TEST nvmf_filesystem_no_in_capsule 00:07:48.604 ************************************ 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.864 
************************************ 00:07:48.864 START TEST nvmf_filesystem_in_capsule 00:07:48.864 ************************************ 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.864 01:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3772401 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3772401 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3772401 ']' 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.864 [2024-07-12 01:26:15.035608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:48.864 [2024-07-12 01:26:15.035644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.864 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.864 [2024-07-12 01:26:15.096943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.864 [2024-07-12 01:26:15.128548] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.864 [2024-07-12 01:26:15.128587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.864 [2024-07-12 01:26:15.128595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.864 [2024-07-12 01:26:15.128602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.864 [2024-07-12 01:26:15.128608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:48.864 [2024-07-12 01:26:15.128749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.864 [2024-07-12 01:26:15.128882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.864 [2024-07-12 01:26:15.129047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.864 [2024-07-12 01:26:15.129047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:48.864 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 [2024-07-12 01:26:15.267050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 01:26:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 [2024-07-12 01:26:15.397540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:49.125 { 00:07:49.125 "name": "Malloc1", 00:07:49.125 "aliases": [ 00:07:49.125 "a835cafd-f9b8-4892-8c3a-7bef89708bc4" 00:07:49.125 ], 00:07:49.125 "product_name": "Malloc disk", 00:07:49.125 "block_size": 512, 00:07:49.125 "num_blocks": 1048576, 00:07:49.125 "uuid": "a835cafd-f9b8-4892-8c3a-7bef89708bc4", 00:07:49.125 "assigned_rate_limits": { 00:07:49.125 "rw_ios_per_sec": 0, 00:07:49.125 "rw_mbytes_per_sec": 0, 00:07:49.125 "r_mbytes_per_sec": 0, 00:07:49.125 "w_mbytes_per_sec": 0 00:07:49.125 }, 00:07:49.125 "claimed": true, 00:07:49.125 "claim_type": "exclusive_write", 00:07:49.125 "zoned": false, 00:07:49.125 "supported_io_types": { 00:07:49.125 "read": true, 00:07:49.125 "write": true, 00:07:49.125 "unmap": true, 00:07:49.125 "write_zeroes": true, 00:07:49.125 "flush": true, 00:07:49.125 "reset": true, 00:07:49.125 "compare": false, 00:07:49.125 "compare_and_write": false, 00:07:49.125 "abort": true, 00:07:49.125 "nvme_admin": false, 00:07:49.125 "nvme_io": false 00:07:49.125 }, 00:07:49.125 "memory_domains": [ 00:07:49.125 { 00:07:49.125 "dma_device_id": "system", 00:07:49.125 "dma_device_type": 1 00:07:49.125 }, 00:07:49.125 { 00:07:49.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.125 "dma_device_type": 2 00:07:49.125 } 00:07:49.125 ], 00:07:49.125 "driver_specific": {} 00:07:49.125 } 00:07:49.125 ]' 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:49.125 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:49.385 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:49.385 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:49.385 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:49.385 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:49.385 01:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.769 01:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.769 01:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:50.769 01:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.769 01:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:50.769 01:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.675 01:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.675 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.935 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:53.194 01:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.133 ************************************ 00:07:54.133 START TEST filesystem_in_capsule_ext4 00:07:54.133 ************************************ 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:54.133 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:54.134 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:54.134 mke2fs 1.46.5 (30-Dec-2021) 00:07:54.134 Discarding device blocks: 0/522240 done 00:07:54.134 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:54.134 Filesystem UUID: 75ce8a56-000a-4b53-9496-e22e6b35d194 00:07:54.134 Superblock backups stored on blocks: 00:07:54.134 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:54.134 00:07:54.134 Allocating group tables: 0/64 done 00:07:54.134 Writing inode tables: 0/64 done 00:07:54.703 Creating journal (8192 blocks): done 00:07:54.703 Writing superblocks and filesystem accounting information: 0/64 done 00:07:54.703 00:07:54.703 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:54.703 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.703 01:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.703 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:54.703 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.703 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:54.703 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3772401 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.704 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.963 00:07:54.963 real 0m0.715s 00:07:54.963 user 0m0.028s 00:07:54.963 sys 0m0.046s 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:54.963 ************************************ 00:07:54.963 END TEST filesystem_in_capsule_ext4 00:07:54.963 ************************************ 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.963 ************************************ 00:07:54.963 START TEST filesystem_in_capsule_btrfs 00:07:54.963 ************************************ 00:07:54.963 01:26:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:54.963 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.963 btrfs-progs v6.6.2 00:07:54.963 See https://btrfs.readthedocs.io for more information. 00:07:54.964 00:07:54.964 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:54.964 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.964 this does not affect your deployments: 00:07:54.964 - DUP for metadata (-m dup) 00:07:54.964 - enabled no-holes (-O no-holes) 00:07:54.964 - enabled free-space-tree (-R free-space-tree) 00:07:54.964 00:07:54.964 Label: (null) 00:07:54.964 UUID: 57a0a1f2-ce87-48d9-bec9-387997e717ae 00:07:54.964 Node size: 16384 00:07:54.964 Sector size: 4096 00:07:54.964 Filesystem size: 510.00MiB 00:07:54.964 Block group profiles: 00:07:54.964 Data: single 8.00MiB 00:07:54.964 Metadata: DUP 32.00MiB 00:07:54.964 System: DUP 8.00MiB 00:07:54.964 SSD detected: yes 00:07:54.964 Zoned device: no 00:07:54.964 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.964 Runtime features: free-space-tree 00:07:54.964 Checksum: crc32c 00:07:54.964 Number of devices: 1 00:07:54.964 Devices: 00:07:54.964 ID SIZE PATH 00:07:54.964 1 510.00MiB /dev/nvme0n1p1 00:07:54.964 00:07:54.964 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:54.964 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.964 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.964 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3772401 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.223 00:07:55.223 real 0m0.230s 00:07:55.223 user 0m0.025s 00:07:55.223 sys 0m0.058s 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.223 ************************************ 00:07:55.223 END TEST filesystem_in_capsule_btrfs 00:07:55.223 ************************************ 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.223 ************************************ 00:07:55.223 START TEST filesystem_in_capsule_xfs 00:07:55.223 ************************************ 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:55.223 01:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:55.223 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:55.223 = sectsz=512 attr=2, projid32bit=1 00:07:55.223 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:55.223 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:55.223 data = bsize=4096 blocks=130560, imaxpct=25 00:07:55.223 = sunit=0 swidth=0 blks 00:07:55.223 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:55.223 log =internal log bsize=4096 blocks=16384, version=2 00:07:55.223 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:55.223 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:56.162 Discarding blocks...Done. 
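
For reference, the target-side bring-up that the nvmf_filesystem_in_capsule trace performs before any mkfs runs reduces to the sketch below. All RPCs, addresses and NQNs are the ones visible in the trace (rpc_cmd is the test framework's RPC wrapper); the 4096 passed to -c is the in-capsule data size that gives the test its name, and the host UUID is the one generated for this run.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # nvmfpid=3772401 in this run
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # teardown at the end of the test, as traced earlier:
    #   nvme disconnect -n nqn.2016-06.io.spdk:cnode1; rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
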
00:07:56.162 01:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:56.162 01:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3772401 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.077 00:07:58.077 real 0m2.740s 00:07:58.077 user 0m0.024s 00:07:58.077 sys 0m0.056s 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.077 ************************************ 00:07:58.077 END TEST filesystem_in_capsule_xfs 00:07:58.077 ************************************ 00:07:58.077 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.368 01:26:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3772401 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3772401 ']' 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3772401 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:58.368 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3772401 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3772401' 00:07:58.657 killing process with pid 3772401 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3772401 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3772401 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:58.657 00:07:58.657 real 0m9.929s 00:07:58.657 user 0m39.188s 00:07:58.657 sys 0m0.931s 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.657 ************************************ 00:07:58.657 END TEST nvmf_filesystem_in_capsule 00:07:58.657 ************************************ 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.657 01:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.657 rmmod nvme_tcp 00:07:58.657 rmmod nvme_fabrics 00:07:58.917 rmmod nvme_keyring 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.917 01:26:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.832 01:26:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.832 00:08:00.832 real 0m32.140s 00:08:00.832 user 1m26.469s 00:08:00.832 sys 0m8.252s 00:08:00.832 01:26:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.832 01:26:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.832 ************************************ 00:08:00.832 END TEST nvmf_filesystem 00:08:00.832 ************************************ 00:08:00.832 01:26:27 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.832 01:26:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.832 01:26:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.832 01:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.094 ************************************ 00:08:01.094 START TEST nvmf_target_discovery 00:08:01.094 ************************************ 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:01.094 * Looking for test storage... 
00:08:01.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.094 01:26:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.234 01:26:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:09.234 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:09.234 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:09.234 Found net devices under 0000:31:00.0: cvl_0_0 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:09.234 Found net devices under 0000:31:00.1: cvl_0_1 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:08:09.234 00:08:09.234 --- 10.0.0.2 ping statistics --- 00:08:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.234 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:08:09.234 00:08:09.234 --- 10.0.0.1 ping statistics --- 00:08:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.234 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:09.234 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3779223 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3779223 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3779223 ']' 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:09.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.235 01:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.235 [2024-07-12 01:26:35.420434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:09.235 [2024-07-12 01:26:35.420497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.235 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.235 [2024-07-12 01:26:35.499676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.235 [2024-07-12 01:26:35.540111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.235 [2024-07-12 01:26:35.540159] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.235 [2024-07-12 01:26:35.540167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.235 [2024-07-12 01:26:35.540174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.235 [2024-07-12 01:26:35.540180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.235 [2024-07-12 01:26:35.540337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.235 [2024-07-12 01:26:35.540452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.235 [2024-07-12 01:26:35.540611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.235 [2024-07-12 01:26:35.540612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 [2024-07-12 01:26:36.249930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:10.179 01:26:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 Null1 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 [2024-07-12 01:26:36.310256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 Null2 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:10.179 01:26:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.179 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 Null3 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 Null4 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.180 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:10.442 00:08:10.442 Discovery Log Number of Records 6, Generation counter 6 00:08:10.442 =====Discovery Log Entry 0====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: current discovery subsystem 00:08:10.442 treq: not required 00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4420 00:08:10.442 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: explicit discovery connections, duplicate discovery information 00:08:10.442 sectype: none 00:08:10.442 =====Discovery Log Entry 1====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: nvme subsystem 00:08:10.442 treq: not required 00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4420 00:08:10.442 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: none 00:08:10.442 sectype: none 00:08:10.442 =====Discovery Log Entry 2====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: nvme subsystem 00:08:10.442 treq: not required 00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4420 00:08:10.442 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: none 00:08:10.442 sectype: none 00:08:10.442 =====Discovery Log Entry 3====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: nvme subsystem 00:08:10.442 treq: not required 00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4420 00:08:10.442 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: none 00:08:10.442 sectype: none 00:08:10.442 =====Discovery Log Entry 4====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: nvme subsystem 00:08:10.442 treq: not required 
00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4420 00:08:10.442 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: none 00:08:10.442 sectype: none 00:08:10.442 =====Discovery Log Entry 5====== 00:08:10.442 trtype: tcp 00:08:10.442 adrfam: ipv4 00:08:10.442 subtype: discovery subsystem referral 00:08:10.442 treq: not required 00:08:10.442 portid: 0 00:08:10.442 trsvcid: 4430 00:08:10.442 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.442 traddr: 10.0.0.2 00:08:10.442 eflags: none 00:08:10.442 sectype: none 00:08:10.442 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:10.442 Perform nvmf subsystem discovery via RPC 00:08:10.442 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:10.442 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.442 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.442 [ 00:08:10.442 { 00:08:10.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:10.442 "subtype": "Discovery", 00:08:10.442 "listen_addresses": [ 00:08:10.442 { 00:08:10.442 "trtype": "TCP", 00:08:10.442 "adrfam": "IPv4", 00:08:10.442 "traddr": "10.0.0.2", 00:08:10.442 "trsvcid": "4420" 00:08:10.442 } 00:08:10.442 ], 00:08:10.442 "allow_any_host": true, 00:08:10.442 "hosts": [] 00:08:10.442 }, 00:08:10.442 { 00:08:10.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.442 "subtype": "NVMe", 00:08:10.442 "listen_addresses": [ 00:08:10.442 { 00:08:10.442 "trtype": "TCP", 00:08:10.442 "adrfam": "IPv4", 00:08:10.442 "traddr": "10.0.0.2", 00:08:10.442 "trsvcid": "4420" 00:08:10.442 } 00:08:10.442 ], 00:08:10.442 "allow_any_host": true, 00:08:10.442 "hosts": [], 00:08:10.442 "serial_number": "SPDK00000000000001", 00:08:10.442 "model_number": "SPDK bdev Controller", 00:08:10.442 "max_namespaces": 32, 00:08:10.442 "min_cntlid": 1, 00:08:10.442 "max_cntlid": 65519, 00:08:10.442 "namespaces": [ 00:08:10.442 { 00:08:10.442 "nsid": 1, 00:08:10.442 "bdev_name": "Null1", 00:08:10.442 "name": "Null1", 00:08:10.442 "nguid": "4F0028F6103446C48CCEA53B85137E67", 00:08:10.442 "uuid": "4f0028f6-1034-46c4-8cce-a53b85137e67" 00:08:10.442 } 00:08:10.442 ] 00:08:10.442 }, 00:08:10.442 { 00:08:10.442 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:10.442 "subtype": "NVMe", 00:08:10.442 "listen_addresses": [ 00:08:10.442 { 00:08:10.442 "trtype": "TCP", 00:08:10.442 "adrfam": "IPv4", 00:08:10.442 "traddr": "10.0.0.2", 00:08:10.442 "trsvcid": "4420" 00:08:10.442 } 00:08:10.442 ], 00:08:10.442 "allow_any_host": true, 00:08:10.442 "hosts": [], 00:08:10.442 "serial_number": "SPDK00000000000002", 00:08:10.442 "model_number": "SPDK bdev Controller", 00:08:10.442 "max_namespaces": 32, 00:08:10.442 "min_cntlid": 1, 00:08:10.442 "max_cntlid": 65519, 00:08:10.442 "namespaces": [ 00:08:10.442 { 00:08:10.442 "nsid": 1, 00:08:10.442 "bdev_name": "Null2", 00:08:10.442 "name": "Null2", 00:08:10.442 "nguid": "65F5A1BBEAAB4F53B95A573582AE299B", 00:08:10.442 "uuid": "65f5a1bb-eaab-4f53-b95a-573582ae299b" 00:08:10.442 } 00:08:10.442 ] 00:08:10.442 }, 00:08:10.442 { 00:08:10.442 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:10.442 "subtype": "NVMe", 00:08:10.442 "listen_addresses": [ 00:08:10.442 { 00:08:10.442 "trtype": "TCP", 00:08:10.442 "adrfam": "IPv4", 00:08:10.442 "traddr": "10.0.0.2", 00:08:10.442 "trsvcid": "4420" 00:08:10.442 } 00:08:10.442 ], 00:08:10.442 "allow_any_host": true, 
00:08:10.442 "hosts": [], 00:08:10.442 "serial_number": "SPDK00000000000003", 00:08:10.442 "model_number": "SPDK bdev Controller", 00:08:10.443 "max_namespaces": 32, 00:08:10.443 "min_cntlid": 1, 00:08:10.443 "max_cntlid": 65519, 00:08:10.443 "namespaces": [ 00:08:10.443 { 00:08:10.443 "nsid": 1, 00:08:10.443 "bdev_name": "Null3", 00:08:10.443 "name": "Null3", 00:08:10.443 "nguid": "02FD3086ABAB4B6FBC2F602A8848950E", 00:08:10.443 "uuid": "02fd3086-abab-4b6f-bc2f-602a8848950e" 00:08:10.443 } 00:08:10.443 ] 00:08:10.443 }, 00:08:10.443 { 00:08:10.443 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:10.443 "subtype": "NVMe", 00:08:10.443 "listen_addresses": [ 00:08:10.443 { 00:08:10.443 "trtype": "TCP", 00:08:10.443 "adrfam": "IPv4", 00:08:10.443 "traddr": "10.0.0.2", 00:08:10.443 "trsvcid": "4420" 00:08:10.443 } 00:08:10.443 ], 00:08:10.443 "allow_any_host": true, 00:08:10.443 "hosts": [], 00:08:10.443 "serial_number": "SPDK00000000000004", 00:08:10.443 "model_number": "SPDK bdev Controller", 00:08:10.443 "max_namespaces": 32, 00:08:10.443 "min_cntlid": 1, 00:08:10.443 "max_cntlid": 65519, 00:08:10.443 "namespaces": [ 00:08:10.443 { 00:08:10.443 "nsid": 1, 00:08:10.443 "bdev_name": "Null4", 00:08:10.443 "name": "Null4", 00:08:10.443 "nguid": "B22A0B288AB647BB9F2D21F27CE49CB4", 00:08:10.443 "uuid": "b22a0b28-8ab6-47bb-9f2d-21f27ce49cb4" 00:08:10.443 } 00:08:10.443 ] 00:08:10.443 } 00:08:10.443 ] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.443 rmmod nvme_tcp 00:08:10.443 rmmod nvme_fabrics 00:08:10.443 rmmod nvme_keyring 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3779223 ']' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3779223 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3779223 ']' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3779223 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.443 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3779223 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3779223' 00:08:10.704 killing process with pid 3779223 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3779223 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3779223 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.704 01:26:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.247 01:26:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.247 00:08:13.247 real 0m11.837s 00:08:13.247 user 0m8.132s 00:08:13.247 sys 0m6.264s 00:08:13.247 01:26:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.247 01:26:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.247 ************************************ 00:08:13.247 END TEST nvmf_target_discovery 00:08:13.247 ************************************ 00:08:13.247 01:26:39 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.247 01:26:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.247 01:26:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.247 01:26:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.247 ************************************ 00:08:13.247 START TEST nvmf_referrals 00:08:13.247 ************************************ 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.247 * Looking for test storage... 00:08:13.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
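referrals.sh is being configured here: 127.0.0.2, 127.0.0.3 and 127.0.0.4 are the loopback addresses the test will register as discovery referrals, with the referral port and subsystem NQN set in the records that follow. A small sketch of the referral plumbing this test exercises, assuming direct rpc.py calls; nvmf_discovery_get_referrals is shown for illustration and is not traced in this excerpt:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed checkout path

# Register each loopback address as a referral behind the discovery subsystem.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Inspect what initiators would now be referred to, then drop one entry again.
"$RPC" nvmf_discovery_get_referrals
"$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430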
00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.247 01:26:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.387 01:26:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:21.387 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:21.387 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.387 01:26:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:21.387 Found net devices under 0000:31:00.0: cvl_0_0 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:21.387 Found net devices under 0000:31:00.1: cvl_0_1 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.387 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.388 01:26:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:08:21.388 00:08:21.388 --- 10.0.0.2 ping statistics --- 00:08:21.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.388 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:08:21.388 00:08:21.388 --- 10.0.0.1 ping statistics --- 00:08:21.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.388 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3784292 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3784292 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3784292 ']' 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.388 01:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.388 [2024-07-12 01:26:47.562875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:21.388 [2024-07-12 01:26:47.562943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.388 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.388 [2024-07-12 01:26:47.644182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.388 [2024-07-12 01:26:47.683526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.388 [2024-07-12 01:26:47.683573] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.388 [2024-07-12 01:26:47.683581] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.388 [2024-07-12 01:26:47.683588] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.388 [2024-07-12 01:26:47.683593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.388 [2024-07-12 01:26:47.683748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.388 [2024-07-12 01:26:47.683878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.388 [2024-07-12 01:26:47.684039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.388 [2024-07-12 01:26:47.684039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 [2024-07-12 01:26:48.393917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 [2024-07-12 01:26:48.410137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.330 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.331 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:22.591 01:26:48 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.591 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:22.592 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.592 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:22.592 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.851 01:26:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.851 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.112 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.371 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.631 rmmod nvme_tcp 00:08:23.631 rmmod nvme_fabrics 00:08:23.631 rmmod nvme_keyring 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3784292 ']' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3784292 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3784292 ']' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3784292 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:23.631 01:26:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3784292 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3784292' 00:08:23.891 killing process with pid 3784292 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3784292 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3784292 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.891 01:26:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.433 01:26:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.433 00:08:26.433 real 0m13.109s 00:08:26.433 user 0m12.985s 00:08:26.433 sys 0m6.687s 00:08:26.433 01:26:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:26.433 01:26:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.433 ************************************ 00:08:26.433 END TEST nvmf_referrals 00:08:26.433 ************************************ 00:08:26.433 01:26:52 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:26.433 01:26:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:26.433 01:26:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.433 01:26:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.433 ************************************ 00:08:26.433 START TEST nvmf_connect_disconnect 00:08:26.433 ************************************ 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:26.433 * Looking for test storage... 00:08:26.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.433 01:26:52 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:26.433 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.434 01:26:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:34.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:34.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.568 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:34.569 Found net devices under 0000:31:00.0: cvl_0_0 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:34.569 Found net devices under 0000:31:00.1: cvl_0_1 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:08:34.569 00:08:34.569 --- 10.0.0.2 ping statistics --- 00:08:34.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.569 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:08:34.569 00:08:34.569 --- 10.0.0.1 ping statistics --- 00:08:34.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.569 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3789740 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3789740 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3789740 ']' 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:34.569 01:27:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.569 [2024-07-12 01:27:00.648381] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:34.569 [2024-07-12 01:27:00.648456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.569 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.569 [2024-07-12 01:27:00.733221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.569 [2024-07-12 01:27:00.773723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.569 [2024-07-12 01:27:00.773768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.569 [2024-07-12 01:27:00.773779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.569 [2024-07-12 01:27:00.773786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.569 [2024-07-12 01:27:00.773792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.569 [2024-07-12 01:27:00.773937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.569 [2024-07-12 01:27:00.774053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.569 [2024-07-12 01:27:00.774211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.569 [2024-07-12 01:27:00.774212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.140 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [2024-07-12 01:27:01.475829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.141 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.141 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:35.141 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.141 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:35.401 01:27:01 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 [2024-07-12 01:27:01.535199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:35.401 01:27:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:37.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.431 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:22.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.619 rmmod nvme_tcp 00:12:24.619 rmmod nvme_fabrics 00:12:24.619 rmmod nvme_keyring 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3789740 ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
3789740 ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789740' 00:12:24.619 killing process with pid 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3789740 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.619 01:30:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.533 01:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.533 00:12:26.533 real 4m0.428s 00:12:26.533 user 15m14.017s 00:12:26.533 sys 0m19.854s 00:12:26.533 01:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.533 01:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.533 ************************************ 00:12:26.533 END TEST nvmf_connect_disconnect 00:12:26.533 ************************************ 00:12:26.533 01:30:52 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.533 01:30:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.533 01:30:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.533 01:30:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.533 ************************************ 00:12:26.533 START TEST nvmf_multitarget 00:12:26.533 ************************************ 00:12:26.533 01:30:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.793 * Looking for test storage... 
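The nvmf_connect_disconnect run that just ended above (4m0.4s of wall time) built one malloc-backed subsystem over the RPC socket and then drove num_iterations=100 host-side connect/disconnect cycles with 'nvme connect -i 8', which is what produced the long run of "disconnected 1 controller(s)" lines. A rough end-to-end sketch of the same sequence, assuming rpc.py from this SPDK tree and with the waits the test script performs between steps left out:
# sketch for illustration, not captured output
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumption: stock rpc.py, default /var/tmp/spdk.sock
# target side, mirroring the rpc_cmd calls traced above
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                        # 64 MiB malloc bdev, 512-byte blocks (named Malloc0 in this run)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: connect with 8 I/O queues, then tear the association down again
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "NQN:... disconnected 1 controller(s)"
done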
00:12:26.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
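One detail worth noting from the common.sh sourcing above: the host identity is generated on the fly with 'nvme gen-hostnqn' and carried as the --hostnqn/--hostid pair in NVME_HOST. A small sketch of how those two values get used on the initiator side; deriving the host ID from the uuid suffix of the generated NQN is an assumption, though it matches the values logged above, and the subsystem NQN here is just the NVME_SUBNQN placeholder defined in common.sh:
# sketch for illustration, not captured output
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is the uuid suffix of the NQN
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"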
00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.793 01:30:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:35.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.014 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:35.015 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:35.015 Found net devices under 0000:31:00.0: cvl_0_0 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:35.015 Found net devices under 0000:31:00.1: cvl_0_1 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:35.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:12:35.015 00:12:35.015 --- 10.0.0.2 ping statistics --- 00:12:35.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.015 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:12:35.015 00:12:35.015 --- 10.0.0.1 ping statistics --- 00:12:35.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.015 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3841613 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3841613 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3841613 ']' 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:35.015 01:31:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.015 [2024-07-12 01:31:00.802678] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
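The nvmf_tcp_init steps traced a little above wire up the test network the same way each time: one port of the NIC is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the other stays in the default namespace as 10.0.0.1, TCP port 4420 is opened, and a single ping verifies reachability. Condensed into plain commands (root privileges assumed):
# sketch for illustration, not captured output
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                               # reachability check, matching the output above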
00:12:35.015 [2024-07-12 01:31:00.802735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.015 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.015 [2024-07-12 01:31:00.879700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.015 [2024-07-12 01:31:00.919067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.015 [2024-07-12 01:31:00.919108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.015 [2024-07-12 01:31:00.919116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.015 [2024-07-12 01:31:00.919122] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.015 [2024-07-12 01:31:00.919128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.015 [2024-07-12 01:31:00.919303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.015 [2024-07-12 01:31:00.919443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.015 [2024-07-12 01:31:00.919605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.015 [2024-07-12 01:31:00.919606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.275 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:35.537 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:35.537 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:35.537 "nvmf_tgt_1" 00:12:35.537 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:35.798 "nvmf_tgt_2" 00:12:35.798 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.798 01:31:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:35.798 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:35.798 
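The multitarget checks above and below run entirely through multitarget_rpc.py against the target that was just started: starting from the single default target, two extra targets are created with -s 32, the count is verified with jq, and both are deleted again as the trace continues. Replayed by hand the sequence would look roughly like this, using the same script path as the trace:
# sketch for illustration, not captured output
MT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$MT_RPC nvmf_get_targets | jq length            # 1: only the default target exists
$MT_RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$MT_RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$MT_RPC nvmf_get_targets | jq length            # 3 after the two creates
$MT_RPC nvmf_delete_target -n nvmf_tgt_1
$MT_RPC nvmf_delete_target -n nvmf_tgt_2
$MT_RPC nvmf_get_targets | jq length            # back to 1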
01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:35.798 true 00:12:35.798 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:36.059 true 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.059 rmmod nvme_tcp 00:12:36.059 rmmod nvme_fabrics 00:12:36.059 rmmod nvme_keyring 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3841613 ']' 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3841613 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3841613 ']' 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3841613 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:36.059 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3841613 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3841613' 00:12:36.321 killing process with pid 3841613 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3841613 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3841613 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.321 01:31:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.867 01:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.867 00:12:38.867 real 0m11.802s 00:12:38.867 user 0m9.355s 00:12:38.867 sys 0m6.224s 00:12:38.867 01:31:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.867 01:31:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.867 ************************************ 00:12:38.867 END TEST nvmf_multitarget 00:12:38.867 ************************************ 00:12:38.867 01:31:04 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:38.867 01:31:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:38.867 01:31:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.867 01:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.867 ************************************ 00:12:38.867 START TEST nvmf_rpc 00:12:38.867 ************************************ 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:38.867 * Looking for test storage... 00:12:38.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.867 01:31:04 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.867 01:31:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.868 
01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.868 01:31:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.018 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:47.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:47.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:47.019 Found net devices under 0000:31:00.0: cvl_0_0 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.019 
01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:47.019 Found net devices under 0000:31:00.1: cvl_0_1 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:47.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:12:47.019 00:12:47.019 --- 10.0.0.2 ping statistics --- 00:12:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.019 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:12:47.019 00:12:47.019 --- 10.0.0.1 ping statistics --- 00:12:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.019 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3846657 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3846657 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3846657 ']' 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.019 01:31:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.019 [2024-07-12 01:31:12.994745] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
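[annotation] nvmf_tcp_init (nvmf/common.sh@229-268, traced just above) splits the two interfaces across a network namespace so target and initiator talk over a real TCP path on one host: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, port 4420 is opened, both directions are pinged, and nvmf_tgt is then launched inside the namespace. Condensed from the trace; a sketch of the sequence, not the exact helper:

  # Target-side interface in its own namespace; initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  # target app started inside the namespace with the flags shown in the log
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &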
00:12:47.019 [2024-07-12 01:31:12.994793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.019 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.019 [2024-07-12 01:31:13.067903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.019 [2024-07-12 01:31:13.099504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.019 [2024-07-12 01:31:13.099543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.019 [2024-07-12 01:31:13.099552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.019 [2024-07-12 01:31:13.099558] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.019 [2024-07-12 01:31:13.099564] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.019 [2024-07-12 01:31:13.099703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.019 [2024-07-12 01:31:13.099818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.019 [2024-07-12 01:31:13.099971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.019 [2024-07-12 01:31:13.099972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:47.019 "tick_rate": 2400000000, 00:12:47.019 "poll_groups": [ 00:12:47.019 { 00:12:47.019 "name": "nvmf_tgt_poll_group_000", 00:12:47.019 "admin_qpairs": 0, 00:12:47.019 "io_qpairs": 0, 00:12:47.019 "current_admin_qpairs": 0, 00:12:47.019 "current_io_qpairs": 0, 00:12:47.019 "pending_bdev_io": 0, 00:12:47.019 "completed_nvme_io": 0, 00:12:47.019 "transports": [] 00:12:47.019 }, 00:12:47.019 { 00:12:47.019 "name": "nvmf_tgt_poll_group_001", 00:12:47.019 "admin_qpairs": 0, 00:12:47.019 "io_qpairs": 0, 00:12:47.019 "current_admin_qpairs": 0, 00:12:47.019 "current_io_qpairs": 0, 00:12:47.019 "pending_bdev_io": 0, 00:12:47.019 "completed_nvme_io": 0, 00:12:47.019 "transports": [] 00:12:47.019 }, 00:12:47.019 { 00:12:47.019 "name": "nvmf_tgt_poll_group_002", 00:12:47.019 "admin_qpairs": 0, 00:12:47.019 "io_qpairs": 0, 00:12:47.019 "current_admin_qpairs": 0, 00:12:47.019 "current_io_qpairs": 0, 00:12:47.019 "pending_bdev_io": 0, 00:12:47.019 "completed_nvme_io": 0, 00:12:47.019 "transports": [] 
00:12:47.019 }, 00:12:47.019 { 00:12:47.019 "name": "nvmf_tgt_poll_group_003", 00:12:47.019 "admin_qpairs": 0, 00:12:47.019 "io_qpairs": 0, 00:12:47.019 "current_admin_qpairs": 0, 00:12:47.019 "current_io_qpairs": 0, 00:12:47.019 "pending_bdev_io": 0, 00:12:47.019 "completed_nvme_io": 0, 00:12:47.019 "transports": [] 00:12:47.019 } 00:12:47.019 ] 00:12:47.019 }' 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.019 [2024-07-12 01:31:13.357446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.019 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.280 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:47.280 "tick_rate": 2400000000, 00:12:47.280 "poll_groups": [ 00:12:47.280 { 00:12:47.280 "name": "nvmf_tgt_poll_group_000", 00:12:47.280 "admin_qpairs": 0, 00:12:47.280 "io_qpairs": 0, 00:12:47.281 "current_admin_qpairs": 0, 00:12:47.281 "current_io_qpairs": 0, 00:12:47.281 "pending_bdev_io": 0, 00:12:47.281 "completed_nvme_io": 0, 00:12:47.281 "transports": [ 00:12:47.281 { 00:12:47.281 "trtype": "TCP" 00:12:47.281 } 00:12:47.281 ] 00:12:47.281 }, 00:12:47.281 { 00:12:47.281 "name": "nvmf_tgt_poll_group_001", 00:12:47.281 "admin_qpairs": 0, 00:12:47.281 "io_qpairs": 0, 00:12:47.281 "current_admin_qpairs": 0, 00:12:47.281 "current_io_qpairs": 0, 00:12:47.281 "pending_bdev_io": 0, 00:12:47.281 "completed_nvme_io": 0, 00:12:47.281 "transports": [ 00:12:47.281 { 00:12:47.281 "trtype": "TCP" 00:12:47.281 } 00:12:47.281 ] 00:12:47.281 }, 00:12:47.281 { 00:12:47.281 "name": "nvmf_tgt_poll_group_002", 00:12:47.281 "admin_qpairs": 0, 00:12:47.281 "io_qpairs": 0, 00:12:47.281 "current_admin_qpairs": 0, 00:12:47.281 "current_io_qpairs": 0, 00:12:47.281 "pending_bdev_io": 0, 00:12:47.281 "completed_nvme_io": 0, 00:12:47.281 "transports": [ 00:12:47.281 { 00:12:47.281 "trtype": "TCP" 00:12:47.281 } 00:12:47.281 ] 00:12:47.281 }, 00:12:47.281 { 00:12:47.281 "name": "nvmf_tgt_poll_group_003", 00:12:47.281 "admin_qpairs": 0, 00:12:47.281 "io_qpairs": 0, 00:12:47.281 "current_admin_qpairs": 0, 00:12:47.281 "current_io_qpairs": 0, 00:12:47.281 "pending_bdev_io": 0, 00:12:47.281 "completed_nvme_io": 0, 00:12:47.281 "transports": [ 00:12:47.281 { 00:12:47.281 "trtype": "TCP" 00:12:47.281 } 00:12:47.281 ] 00:12:47.281 } 00:12:47.281 ] 
00:12:47.281 }' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 Malloc1 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 [2024-07-12 01:31:13.549245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:47.281 [2024-07-12 01:31:13.576023] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:47.281 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.281 could not add new controller: failed to write to nvme-fabrics device 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.281 01:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.193 01:31:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.193 01:31:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:49.193 01:31:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.193 01:31:15 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:49.193 01:31:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.106 [2024-07-12 01:31:17.272411] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:51.106 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.106 could not add new controller: failed to write to nvme-fabrics device 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.106 01:31:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.490 01:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.490 01:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:52.490 01:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.490 01:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:52.490 01:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:54.402 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.663 [2024-07-12 01:31:20.881582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.663 01:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.048 01:31:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.048 01:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:56.048 01:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.048 01:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:56.048 01:31:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.592 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 [2024-07-12 01:31:24.543492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.593 
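[annotation] Each iteration of the seq 1 5 loop traced here exercises a full subsystem lifecycle over the RPC socket: create the subsystem, add a TCP listener, attach the Malloc1 namespace, open it to any host, connect from the initiator, then tear everything down. Assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py (run against the target inside the namespace in this job), the equivalent calls, with the NQN/serial/address values from the log, look roughly like:

  RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5
  $RPC nvmf_subsystem_allow_any_host $NQN

  nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420   # only succeeds once the host is allowed
  # ... serial check / I/O happens here ...
  nvme disconnect -n $NQN

  $RPC nvmf_subsystem_remove_ns $NQN 5
  $RPC nvmf_delete_subsystem $NQN

The failed connects earlier in this section (ctrlr.c "does not allow host") are the negative half of the same gate: until nvmf_subsystem_add_host or nvmf_subsystem_allow_any_host runs, the fabrics connect is rejected with an I/O error on /dev/nvme-fabrics.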
01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.593 01:31:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.973 01:31:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.973 01:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:59.973 01:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.973 01:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:59.973 01:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.885 01:31:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 [2024-07-12 01:31:28.212857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.885 01:31:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.792 01:31:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.792 01:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:03.792 01:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.792 01:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:03.792 01:31:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 [2024-07-12 01:31:31.882333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.704 01:31:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.088 01:31:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.088 01:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:07.089 01:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
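[annotation] waitforserial and waitforserial_disconnect, whose xtrace dominates each iteration above, are simple polling loops: they repeatedly list block devices with their serial numbers and wait for the SPDKISFASTANDAWESOME serial to appear (or disappear) before the test proceeds. A reduced sketch of the appear-side check, assuming the 15-attempt / 2-second cadence seen in the trace and a single expected device:

  # Wait until the fabrics namespace shows up as a block device with the expected serial.
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }

  waitforserial SPDKISFASTANDAWESOME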
00:13:07.089 01:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:07.089 01:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 [2024-07-12 01:31:35.595933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.634 01:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.017 01:31:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.017 01:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:11.018 01:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.018 01:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:11.018 01:31:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:12.930 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
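[annotation] Once the connect/disconnect loops finish, the test re-queries nvmf_get_stats and checks the totals with the jcount/jsum helpers traced earlier in this section: jcount counts matching JSON fields, jsum pipes a jq filter into awk to sum the per-poll-group counters (the 889 io_qpairs total summed further down). A standalone equivalent of jsum over a saved stats document, assuming the stats have been written to stats.json:

  # Sum a numeric field across all poll groups, as jsum does.
  jsum() {
      local filter=$1
      jq "$filter" stats.json | awk '{s+=$1} END {print s}'
  }

  # Capture stats over the RPC socket, then total the queue-pair counters.
  ip netns exec cvl_0_0_ns_spdk scripts/rpc.py nvmf_get_stats > stats.json
  jsum '.poll_groups[].io_qpairs'      # e.g. 889 in this run
  jsum '.poll_groups[].admin_qpairs'   # e.g. 7 in this run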
00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 [2024-07-12 01:31:39.261944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.931 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 [2024-07-12 01:31:39.322083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 [2024-07-12 01:31:39.382248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 [2024-07-12 01:31:39.438419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 [2024-07-12 01:31:39.498615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:13.453 "tick_rate": 2400000000, 00:13:13.453 "poll_groups": [ 00:13:13.453 { 00:13:13.453 "name": "nvmf_tgt_poll_group_000", 00:13:13.453 "admin_qpairs": 0, 00:13:13.453 
"io_qpairs": 224, 00:13:13.453 "current_admin_qpairs": 0, 00:13:13.453 "current_io_qpairs": 0, 00:13:13.453 "pending_bdev_io": 0, 00:13:13.453 "completed_nvme_io": 224, 00:13:13.453 "transports": [ 00:13:13.453 { 00:13:13.453 "trtype": "TCP" 00:13:13.453 } 00:13:13.453 ] 00:13:13.453 }, 00:13:13.453 { 00:13:13.453 "name": "nvmf_tgt_poll_group_001", 00:13:13.453 "admin_qpairs": 1, 00:13:13.453 "io_qpairs": 223, 00:13:13.453 "current_admin_qpairs": 0, 00:13:13.453 "current_io_qpairs": 0, 00:13:13.453 "pending_bdev_io": 0, 00:13:13.453 "completed_nvme_io": 415, 00:13:13.453 "transports": [ 00:13:13.453 { 00:13:13.453 "trtype": "TCP" 00:13:13.453 } 00:13:13.453 ] 00:13:13.453 }, 00:13:13.453 { 00:13:13.453 "name": "nvmf_tgt_poll_group_002", 00:13:13.453 "admin_qpairs": 6, 00:13:13.453 "io_qpairs": 218, 00:13:13.453 "current_admin_qpairs": 0, 00:13:13.453 "current_io_qpairs": 0, 00:13:13.453 "pending_bdev_io": 0, 00:13:13.453 "completed_nvme_io": 273, 00:13:13.453 "transports": [ 00:13:13.453 { 00:13:13.453 "trtype": "TCP" 00:13:13.453 } 00:13:13.453 ] 00:13:13.453 }, 00:13:13.453 { 00:13:13.453 "name": "nvmf_tgt_poll_group_003", 00:13:13.453 "admin_qpairs": 0, 00:13:13.453 "io_qpairs": 224, 00:13:13.453 "current_admin_qpairs": 0, 00:13:13.453 "current_io_qpairs": 0, 00:13:13.453 "pending_bdev_io": 0, 00:13:13.453 "completed_nvme_io": 327, 00:13:13.453 "transports": [ 00:13:13.453 { 00:13:13.453 "trtype": "TCP" 00:13:13.453 } 00:13:13.453 ] 00:13:13.453 } 00:13:13.453 ] 00:13:13.453 }' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.453 rmmod nvme_tcp 00:13:13.453 rmmod nvme_fabrics 00:13:13.453 rmmod nvme_keyring 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:13.453 01:31:39 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3846657 ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3846657 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3846657 ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3846657 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3846657 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3846657' 00:13:13.453 killing process with pid 3846657 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3846657 00:13:13.453 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3846657 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.714 01:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.638 01:31:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.638 00:13:15.638 real 0m37.299s 00:13:15.638 user 1m49.399s 00:13:15.638 sys 0m7.511s 00:13:15.638 01:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.638 01:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.638 ************************************ 00:13:15.638 END TEST nvmf_rpc 00:13:15.638 ************************************ 00:13:15.900 01:31:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.900 01:31:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:15.900 01:31:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.900 01:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 ************************************ 00:13:15.900 START TEST nvmf_invalid 00:13:15.900 ************************************ 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.900 * Looking for test storage... 
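For reference, the subsystem create/teardown loop that TEST nvmf_rpc exercised above reduces to the rpc.py sequence below (a minimal sketch, assuming a target is already up and listening on its RPC socket; the NQN, serial number, listener address and RPC names mirror the trace, while the loop count is illustrative and "rpc.py" abbreviates scripts/rpc.py in the SPDK checkout):

  #!/usr/bin/env bash
  # Repeatedly build and tear down one NVMe-oF subsystem over the TCP transport.
  loops=5                                   # illustrative; the harness derives its own count
  for i in $(seq 1 "$loops"); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
  # The jsum helper used for the final sanity check just sums one field
  # across all poll groups reported by nvmf_get_stats:
  rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'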
00:13:15.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.900 01:31:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.901 01:31:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.105 01:31:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:24.105 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:24.105 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:24.105 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:24.105 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:24.105 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:24.106 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:24.106 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:24.106 Found net devices under 0000:31:00.0: cvl_0_0 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:24.106 Found net devices under 0000:31:00.1: cvl_0_1 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:24.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:24.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:13:24.106 00:13:24.106 --- 10.0.0.2 ping statistics --- 00:13:24.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.106 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:13:24.106 00:13:24.106 --- 10.0.0.1 ping statistics --- 00:13:24.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.106 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3856706 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3856706 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3856706 ']' 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:24.106 01:31:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.106 [2024-07-12 01:31:50.409612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
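The network-namespace plumbing that nvmf/common.sh traced above can be reproduced by hand; a minimal sketch, assuming the two ports already appear as cvl_0_0 and cvl_0_1 (interface names, addresses, port and target flags are taken from the trace, and the nvmf_tgt path is abbreviated relative to the build tree):

  # Put the target-side port into its own network namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in from the initiator side and verify reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  # Launch the target inside the namespace, as the harness does before each test.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &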
00:13:24.106 [2024-07-12 01:31:50.409680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.106 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.368 [2024-07-12 01:31:50.491009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.368 [2024-07-12 01:31:50.531911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.368 [2024-07-12 01:31:50.531958] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.368 [2024-07-12 01:31:50.531966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.368 [2024-07-12 01:31:50.531973] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.368 [2024-07-12 01:31:50.531979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.368 [2024-07-12 01:31:50.532125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.368 [2024-07-12 01:31:50.532324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.368 [2024-07-12 01:31:50.532158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.368 [2024-07-12 01:31:50.532324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:24.940 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22718 00:13:25.201 [2024-07-12 01:31:51.374249] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:25.201 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:25.201 { 00:13:25.201 "nqn": "nqn.2016-06.io.spdk:cnode22718", 00:13:25.201 "tgt_name": "foobar", 00:13:25.201 "method": "nvmf_create_subsystem", 00:13:25.201 "req_id": 1 00:13:25.201 } 00:13:25.201 Got JSON-RPC error response 00:13:25.201 response: 00:13:25.201 { 00:13:25.201 "code": -32603, 00:13:25.201 "message": "Unable to find target foobar" 00:13:25.201 }' 00:13:25.201 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:25.201 { 00:13:25.201 "nqn": "nqn.2016-06.io.spdk:cnode22718", 00:13:25.201 "tgt_name": "foobar", 00:13:25.201 "method": "nvmf_create_subsystem", 00:13:25.201 "req_id": 1 00:13:25.201 } 00:13:25.201 Got JSON-RPC error response 00:13:25.201 response: 00:13:25.201 { 00:13:25.201 "code": -32603, 00:13:25.201 "message": "Unable to find target foobar" 00:13:25.201 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:25.201 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:25.201 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32173 00:13:25.201 [2024-07-12 01:31:51.550851] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32173: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:25.462 { 00:13:25.462 "nqn": "nqn.2016-06.io.spdk:cnode32173", 00:13:25.462 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:25.462 "method": "nvmf_create_subsystem", 00:13:25.462 "req_id": 1 00:13:25.462 } 00:13:25.462 Got JSON-RPC error response 00:13:25.462 response: 00:13:25.462 { 00:13:25.462 "code": -32602, 00:13:25.462 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:25.462 }' 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:25.462 { 00:13:25.462 "nqn": "nqn.2016-06.io.spdk:cnode32173", 00:13:25.462 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:25.462 "method": "nvmf_create_subsystem", 00:13:25.462 "req_id": 1 00:13:25.462 } 00:13:25.462 Got JSON-RPC error response 00:13:25.462 response: 00:13:25.462 { 00:13:25.462 "code": -32602, 00:13:25.462 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:25.462 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31902 00:13:25.462 [2024-07-12 01:31:51.727394] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31902: invalid model number 'SPDK_Controller' 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:25.462 { 00:13:25.462 "nqn": "nqn.2016-06.io.spdk:cnode31902", 00:13:25.462 "model_number": "SPDK_Controller\u001f", 00:13:25.462 "method": "nvmf_create_subsystem", 00:13:25.462 "req_id": 1 00:13:25.462 } 00:13:25.462 Got JSON-RPC error response 00:13:25.462 response: 00:13:25.462 { 00:13:25.462 "code": -32602, 00:13:25.462 "message": "Invalid MN SPDK_Controller\u001f" 00:13:25.462 }' 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:25.462 { 00:13:25.462 "nqn": "nqn.2016-06.io.spdk:cnode31902", 00:13:25.462 "model_number": "SPDK_Controller\u001f", 00:13:25.462 "method": "nvmf_create_subsystem", 00:13:25.462 "req_id": 1 00:13:25.462 } 00:13:25.462 Got JSON-RPC error response 00:13:25.462 response: 00:13:25.462 { 00:13:25.462 "code": -32602, 00:13:25.462 "message": "Invalid MN SPDK_Controller\u001f" 00:13:25.462 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:25.462 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.463 01:31:51 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.724 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 78 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:13:25.725 01:31:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.`(&c%Z /dev/null' 00:13:28.080 01:31:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.629 01:31:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.629 00:13:30.629 real 0m14.297s 00:13:30.629 user 0m19.412s 00:13:30.629 sys 0m6.988s 00:13:30.629 01:31:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.629 01:31:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.629 ************************************ 00:13:30.629 END TEST nvmf_invalid 00:13:30.629 ************************************ 00:13:30.629 01:31:56 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:30.629 01:31:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:30.629 01:31:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.629 01:31:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.629 ************************************ 00:13:30.629 START TEST nvmf_abort 00:13:30.629 ************************************ 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:30.629 * Looking for test storage... 00:13:30.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.629 
01:31:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.629 01:31:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:38.774 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:38.774 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:38.774 Found net devices under 0000:31:00.0: cvl_0_0 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:38.774 Found net devices under 0000:31:00.1: cvl_0_1 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.774 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:13:38.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:13:38.774 00:13:38.774 --- 10.0.0.2 ping statistics --- 00:13:38.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.774 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:13:38.774 00:13:38.774 --- 10.0.0.1 ping statistics --- 00:13:38.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.774 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3862388 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3862388 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3862388 ']' 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.774 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.775 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.775 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.775 01:32:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:38.775 [2024-07-12 01:32:04.912104] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:38.775 [2024-07-12 01:32:04.912170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.775 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.775 [2024-07-12 01:32:05.007414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.775 [2024-07-12 01:32:05.054062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.775 [2024-07-12 01:32:05.054116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.775 [2024-07-12 01:32:05.054124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.775 [2024-07-12 01:32:05.054131] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.775 [2024-07-12 01:32:05.054137] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.775 [2024-07-12 01:32:05.054294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.775 [2024-07-12 01:32:05.054460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.775 [2024-07-12 01:32:05.054460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.344 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.344 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:39.344 01:32:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.344 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.344 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 [2024-07-12 01:32:05.730058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 Malloc0 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 Delay0 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:39.604 01:32:05 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 [2024-07-12 01:32:05.815333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.604 01:32:05 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:39.604 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.604 [2024-07-12 01:32:05.923869] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:42.150 Initializing NVMe Controllers 00:13:42.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:42.150 controller IO queue size 128 less than required 00:13:42.150 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:42.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:42.150 Initialization complete. Launching workers. 
00:13:42.150 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33903 00:13:42.150 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33964, failed to submit 62 00:13:42.150 success 33907, unsuccess 57, failed 0 00:13:42.150 01:32:07 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.150 01:32:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.150 rmmod nvme_tcp 00:13:42.150 rmmod nvme_fabrics 00:13:42.150 rmmod nvme_keyring 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3862388 ']' 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3862388 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3862388 ']' 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3862388 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:42.150 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3862388 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3862388' 00:13:42.151 killing process with pid 3862388 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3862388 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3862388 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.151 01:32:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.065 01:32:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.065 00:13:44.065 real 0m13.902s 00:13:44.065 user 0m13.749s 00:13:44.065 sys 0m6.946s 00:13:44.065 01:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.065 01:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.065 ************************************ 00:13:44.065 END TEST nvmf_abort 00:13:44.065 ************************************ 00:13:44.065 01:32:10 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:44.065 01:32:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:44.065 01:32:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.065 01:32:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.065 ************************************ 00:13:44.065 START TEST nvmf_ns_hotplug_stress 00:13:44.065 ************************************ 00:13:44.065 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:44.326 * Looking for test storage... 00:13:44.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.326 01:32:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.326 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.327 01:32:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.327 01:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.473 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:52.474 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:52.474 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.474 01:32:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:52.474 Found net devices under 0000:31:00.0: cvl_0_0 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:52.474 Found net devices under 0000:31:00.1: cvl_0_1 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:13:52.474 00:13:52.474 --- 10.0.0.2 ping statistics --- 00:13:52.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.474 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:13:52.474 00:13:52.474 --- 10.0.0.1 ping statistics --- 00:13:52.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.474 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.474 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3867760 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3867760 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3867760 ']' 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.737 01:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.737 [2024-07-12 01:32:18.914429] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:52.737 [2024-07-12 01:32:18.914493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.737 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.737 [2024-07-12 01:32:19.010191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.737 [2024-07-12 01:32:19.056835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:52.737 [2024-07-12 01:32:19.056892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.737 [2024-07-12 01:32:19.056901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.737 [2024-07-12 01:32:19.056908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.737 [2024-07-12 01:32:19.056919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.737 [2024-07-12 01:32:19.057053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.737 [2024-07-12 01:32:19.057215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.737 [2024-07-12 01:32:19.057215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.681 [2024-07-12 01:32:19.873132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.681 01:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:53.942 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.942 [2024-07-12 01:32:20.214595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.942 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.203 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:54.464 Malloc0 00:13:54.464 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:54.464 Delay0 00:13:54.464 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.725 01:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:54.986 NULL1 00:13:54.986 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:54.986 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:54.987 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3868133 00:13:54.987 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:54.987 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.987 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.246 Read completed with error (sct=0, sc=11) 00:13:55.246 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:55.247 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:55.507 [2024-07-12 01:32:21.738375] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:13:55.507 true 00:13:55.507 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:55.507 01:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.449 01:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.449 01:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:56.449 01:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:56.710 true 00:13:56.710 01:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:56.710 01:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.970 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.970 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:56.970 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:57.230 true 00:13:57.230 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:57.231 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.231 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.491 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:57.491 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:57.752 true 00:13:57.752 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:57.752 01:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.752 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.014 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:58.014 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:58.014 true 00:13:58.275 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:58.275 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.275 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.536 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:58.536 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:58.536 true 00:13:58.536 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:58.536 01:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.478 01:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.740 01:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:59.740 01:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:59.740 true 00:13:59.740 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:13:59.740 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.001 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.262 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:00.263 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:00.263 true 00:14:00.263 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:00.263 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.523 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.785 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:00.785 01:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:00.785 true 00:14:00.785 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:00.785 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.047 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.308 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:01.308 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:01.308 true 00:14:01.308 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:01.308 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.569 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.569 01:32:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:01.569 01:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:01.830 true 00:14:01.830 01:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:01.830 01:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.837 01:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.837 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:02.837 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:03.131 true 00:14:03.131 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:03.131 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.131 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.392 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:03.392 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:03.653 true 00:14:03.653 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:03.653 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.653 01:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.914 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:03.914 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:03.914 true 00:14:04.176 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:04.176 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.176 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.437 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:04.437 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:04.437 true 00:14:04.696 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:04.696 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.696 01:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.956 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:04.956 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:04.956 true 00:14:04.956 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:04.956 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.217 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.478 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:05.478 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:05.478 true 00:14:05.478 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:05.478 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.739 01:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.739 01:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:05.739 01:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:06.000 true 00:14:06.000 01:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:06.000 01:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.941 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.200 01:32:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:07.200 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:07.200 true 00:14:07.200 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:07.200 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.461 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.721 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:07.721 01:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:07.721 true 00:14:07.721 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:07.721 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.982 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.242 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:08.242 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:08.242 true 00:14:08.242 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:08.242 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.501 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.501 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:08.501 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:08.761 true 00:14:08.761 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:08.761 01:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.021 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.021 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:09.021 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:09.281 true 00:14:09.281 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:09.281 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.541 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.542 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:09.542 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:09.802 true 00:14:09.802 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:09.802 01:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.802 01:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.061 01:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:10.061 01:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:10.321 true 00:14:10.321 01:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:10.321 01:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.262 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.262 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:11.262 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:11.523 true 00:14:11.523 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:11.523 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.523 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.784 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:11.784 01:32:37 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:11.784 true 00:14:12.044 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:12.044 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.044 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.304 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:12.304 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:12.304 true 00:14:12.304 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:12.304 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.565 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.826 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:12.826 01:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:12.826 true 00:14:12.826 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:12.826 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.087 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.347 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:13.347 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:13.347 true 00:14:13.347 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:13.347 01:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.291 01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.291 01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:14.291 01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:14.569 true 00:14:14.569 
01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:14.569 01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.831 01:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.831 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:14.831 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:15.091 true 00:14:15.091 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:15.091 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.352 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.352 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:15.352 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:15.613 true 00:14:15.613 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:15.613 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.613 01:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.875 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:15.875 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:16.136 true 00:14:16.136 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:16.136 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.136 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.397 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:16.397 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:16.397 true 00:14:16.659 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:16.659 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.659 01:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.919 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:16.919 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:16.919 true 00:14:16.919 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:16.919 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.180 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.441 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:17.441 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:17.441 true 00:14:17.441 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:17.441 01:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.385 01:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.646 01:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:18.646 01:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:18.908 true 00:14:18.908 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:18.908 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.908 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.169 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:19.169 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:19.169 
true 00:14:19.169 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:19.169 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.430 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.691 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:19.691 01:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:19.691 true 00:14:19.691 01:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:19.691 01:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.635 01:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.897 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:20.897 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:20.897 true 00:14:21.158 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:21.158 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.158 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.420 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:21.420 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:21.420 true 00:14:21.680 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:21.680 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.680 01:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.941 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:21.941 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:21.941 true 00:14:21.941 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:21.941 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.202 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.463 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:22.463 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:22.463 true 00:14:22.463 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:22.463 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.724 01:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.724 01:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:22.724 01:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:22.986 true 00:14:22.986 01:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:22.986 01:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.929 01:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.929 01:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:23.929 01:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:24.190 true 00:14:24.190 01:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:24.190 01:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.133 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.133 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:25.133 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:25.395 Initializing NVMe Controllers 00:14:25.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.395 Controller IO queue size 128, less than required. 00:14:25.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.395 Controller IO queue size 128, less than required. 00:14:25.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:25.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:25.395 Initialization complete. Launching workers. 00:14:25.395 ======================================================== 00:14:25.395 Latency(us) 00:14:25.395 Device Information : IOPS MiB/s Average min max 00:14:25.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 763.83 0.37 57271.73 1836.41 1140768.57 00:14:25.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10186.53 4.97 12566.21 1629.34 405531.89 00:14:25.395 ======================================================== 00:14:25.395 Total : 10950.36 5.35 15684.60 1629.34 1140768.57 00:14:25.395 00:14:25.395 true 00:14:25.395 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3868133 00:14:25.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3868133) - No such process 00:14:25.395 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3868133 00:14:25.395 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:25.656 01:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:25.918 null0 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:25.918 null1 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:25.918 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:26.179 null2 00:14:26.179 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.179 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.180 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:26.441 null3 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:26.441 null4 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.441 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:26.703 null5 00:14:26.703 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.703 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.703 01:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:26.964 null6 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:26.964 null7 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3874632 3874633 3874635 3874637 3874639 3874641 3874643 3874644 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.964 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.225 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.487 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 
01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.748 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.748 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.748 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.749 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.749 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.749 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.011 01:32:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.011 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.272 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.534 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.795 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.056 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.057 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.317 01:32:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.317 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.578 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.839 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.099 01:32:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.099 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.360 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.622 rmmod nvme_tcp 00:14:30.622 rmmod nvme_fabrics 00:14:30.622 rmmod nvme_keyring 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3867760 ']' 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3867760 00:14:30.622 01:32:56 
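For readability, the namespace churn traced above (ns_hotplug_stress.sh lines 16-18) follows the pattern sketched below. This is a reconstruction from the xtrace, not the verbatim script: the eight parallel workers and the rpc_py/nqn shorthands are assumptions, while the RPC names, the cnode1 NQN and the null0..null7 bdev names come straight from the trace.

    #!/usr/bin/env bash
    # Reconstructed hotplug churn: eight workers each attach and detach one
    # namespace on cnode1, ten times over (loop bound "i < 10" as traced).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                  # ns_hotplug_stress.sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # nsid 1..8 backed by null0..null7
    done
    wait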
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3867760 ']' 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3867760 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3867760 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3867760' 00:14:30.622 killing process with pid 3867760 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3867760 00:14:30.622 01:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3867760 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.882 01:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.882 01:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.882 00:14:32.882 real 0m48.701s 00:14:32.882 user 3m9.700s 00:14:32.882 sys 0m15.849s 00:14:32.882 01:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.882 01:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.882 ************************************ 00:14:32.882 END TEST nvmf_ns_hotplug_stress 00:14:32.882 ************************************ 00:14:32.882 01:32:59 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:32.882 01:32:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:32.882 01:32:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.882 01:32:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.882 ************************************ 00:14:32.882 START TEST nvmf_connect_stress 00:14:32.882 ************************************ 00:14:32.882 01:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:33.143 * Looking for test storage... 
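The teardown above goes through nvmftestfini: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded and the target is stopped via the killprocess helper from common/autotest_common.sh. From the commands traced, killprocess behaves roughly as sketched below; this is reconstructed from the trace, not copied from the script, and the sudo-wrapper branch (not exercised in this run) is omitted.

    # Sketch of the traced killprocess helper; not the verbatim autotest_common.sh code.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # only called with a live nvmfpid
        kill -0 "$pid" || return 1                  # the process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")     # "reactor_1" for the nvmf_tgt reactor here
        if [ "$name" != sudo ]; then                # the sudo case is handled separately (not traced)
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                 # reap the target before the next test starts
    }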
00:14:33.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.143 01:32:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:41.288 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:41.288 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:41.288 Found net devices under 0000:31:00.0: cvl_0_0 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.288 01:33:07 
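This machine exposes two E810 ports (device ID 0x159b, ice driver); gather_supported_nvmf_pci_devs finds the kernel interfaces behind each matching PCI address by walking sysfs. A condensed sketch of the traced loop (nvmf/common.sh@382-401) follows; reading operstate is an assumption standing in for the '[[ up == up ]]' check seen in the trace.

    # Condensed from the traced device-discovery loop; not the verbatim nvmf/common.sh code.
    net_devs=()
    for pci in "${pci_devs[@]}"; do                       # 0000:31:00.0 and 0000:31:00.1 in this run
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev directories behind this port
        up_devs=()
        for net_dev in "${pci_net_devs[@]}"; do
            [[ $(< "$net_dev/operstate") == up ]] && up_devs+=("${net_dev##*/}")
        done
        (( ${#up_devs[@]} )) || continue
        echo "Found net devices under $pci: ${up_devs[*]}"   # cvl_0_0 / cvl_0_1 here
        net_devs+=("${up_devs[@]}")
    done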
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:41.288 Found net devices under 0000:31:00.1: cvl_0_1 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.288 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:41.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:14:41.289 00:14:41.289 --- 10.0.0.2 ping statistics --- 00:14:41.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.289 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:41.289 00:14:41.289 --- 10.0.0.1 ping statistics --- 00:14:41.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.289 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3880255 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3880255 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3880255 ']' 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.289 01:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 [2024-07-12 01:33:07.673695] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
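For reference, the interface wiring exercised just above boils down to the following shell sequence (device names and addresses exactly as in this run; this is a condensed sketch of what the harness's nvmf_tcp_init step does, not the full function body):

  # move one port of the NIC pair into a private namespace and address each side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in on the default port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1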
00:14:41.549 [2024-07-12 01:33:07.673766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.549 [2024-07-12 01:33:07.770251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.549 [2024-07-12 01:33:07.817281] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.549 [2024-07-12 01:33:07.817329] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.549 [2024-07-12 01:33:07.817338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.549 [2024-07-12 01:33:07.817344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.549 [2024-07-12 01:33:07.817350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.549 [2024-07-12 01:33:07.817519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.549 [2024-07-12 01:33:07.817659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.549 [2024-07-12 01:33:07.817661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.119 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.119 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:42.119 01:33:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.119 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.119 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.379 [2024-07-12 01:33:08.489324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.379 [2024-07-12 01:33:08.513697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.379 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.379 NULL1 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3880677 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.380 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.639 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.639 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:42.639 01:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.639 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.639 01:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.209 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.209 01:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:43.209 01:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.209 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.209 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.468 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.468 01:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:43.468 01:33:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.468 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.468 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.728 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.728 01:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:43.728 01:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.728 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.728 01:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.988 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.988 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:43.988 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.988 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.988 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.248 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.248 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:44.248 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.248 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.248 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.821 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.821 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:44.821 01:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.821 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.821 01:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.080 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.080 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:45.080 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.080 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.080 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.339 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.339 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:45.339 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.339 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.339 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.599 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.599 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:45.599 01:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:45.599 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.599 01:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.170 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.170 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:46.170 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.170 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.170 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.430 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.430 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:46.430 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.430 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.430 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.690 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.690 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:46.690 01:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.690 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.690 01:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.949 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.949 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:46.949 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.949 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.949 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.208 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:47.208 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.208 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.208 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.779 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.779 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:47.779 01:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.779 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.779 01:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.040 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.040 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:48.040 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.040 01:33:14 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.040 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.301 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.301 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:48.301 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.301 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.301 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.562 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.562 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:48.562 01:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.562 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.562 01:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.822 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.822 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:48.822 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.822 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.822 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.395 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.395 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:49.395 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.395 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.395 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.656 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.656 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:49.656 01:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.656 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.656 01:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.916 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.916 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:49.916 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.916 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.916 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.177 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.177 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:50.177 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.177 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:50.177 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.748 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.748 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:50.748 01:33:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.748 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.748 01:33:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.009 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.009 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:51.009 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.009 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.009 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.269 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.269 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:51.269 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.269 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.269 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.530 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:51.530 01:33:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.530 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.530 01:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.790 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.790 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:51.790 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.790 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.790 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.360 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.360 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:52.360 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.360 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.360 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.360 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3880677 00:14:52.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3880677) - No such process 
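The provisioning and stress sequence recorded above amounts to roughly the following; rpc_cmd is the harness wrapper that drives scripts/rpc.py against the target's /var/tmp/spdk.sock, and the polling loop is a simplified reading of connect_stress.sh, whose full body is not captured in this log:

  # bring up the TCP transport and a dummy subsystem to connect against
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # launch the connect/disconnect stress tool for 10 seconds in the background
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # while it runs, keep issuing RPCs against the target (the seq 1 20 / cat loop above builds rpc.txt)
  while kill -0 "$PERF_PID"; do
      rpc_cmd < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt   # assumed batching; exact replay mechanism not shown in this log
  done

The loop ends when kill -0 reports "No such process", i.e. once connect_stress has finished its 10-second run, after which the script removes rpc.txt and tears the target down.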
00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3880677 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.621 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.622 rmmod nvme_tcp 00:14:52.622 rmmod nvme_fabrics 00:14:52.622 rmmod nvme_keyring 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3880255 ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3880255 ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3880255' 00:14:52.622 killing process with pid 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3880255 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.622 01:33:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.167 01:33:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.167 00:14:55.167 real 0m21.850s 00:14:55.167 user 0m42.343s 00:14:55.167 sys 0m9.418s 00:14:55.167 01:33:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.167 01:33:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.167 ************************************ 00:14:55.167 END TEST nvmf_connect_stress 00:14:55.167 ************************************ 00:14:55.167 01:33:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:55.167 01:33:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:55.167 01:33:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.167 01:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.167 ************************************ 00:14:55.167 START TEST nvmf_fused_ordering 00:14:55.167 ************************************ 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:55.167 * Looking for test storage... 00:14:55.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.167 01:33:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.313 
01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:03.313 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:03.313 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.313 01:33:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.313 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:03.314 Found net devices under 0000:31:00.0: cvl_0_0 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:03.314 Found net devices under 0000:31:00.1: cvl_0_1 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:15:03.314 00:15:03.314 --- 10.0.0.2 ping statistics --- 00:15:03.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.314 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:15:03.314 00:15:03.314 --- 10.0.0.1 ping statistics --- 00:15:03.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.314 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3887758 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3887758 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.314 01:33:29 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3887758 ']' 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:03.314 01:33:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:03.314 [2024-07-12 01:33:29.518660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:03.314 [2024-07-12 01:33:29.518723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.314 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.314 [2024-07-12 01:33:29.612692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.314 [2024-07-12 01:33:29.658091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.314 [2024-07-12 01:33:29.658145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.314 [2024-07-12 01:33:29.658153] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.314 [2024-07-12 01:33:29.658160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.314 [2024-07-12 01:33:29.658166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
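nvmfappstart here reduces to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers; a minimal sketch, with the socket-poll loop standing in for the harness's waitforlisten helper (whose body is not shown above):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait until the app is up and listening on its UNIX-domain RPC socket
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "nvmf_tgt running as pid $nvmfpid"

With -m 0x2 this fused_ordering target runs a single reactor (the lone "Reactor started on core 1" notice that follows), whereas the earlier connect_stress target used -m 0xE and started three reactors on cores 1-3.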
00:15:03.314 [2024-07-12 01:33:29.658189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 [2024-07-12 01:33:30.370278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 [2024-07-12 01:33:30.394484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 NULL1 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.259 01:33:30 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.259 01:33:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:04.259 [2024-07-12 01:33:30.461639] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:04.259 [2024-07-12 01:33:30.461685] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887803 ] 00:15:04.259 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.520 Attached to nqn.2016-06.io.spdk:cnode1 00:15:04.520 Namespace ID: 1 size: 1GB 00:15:04.520 fused_ordering(0) 00:15:04.520 fused_ordering(1) 00:15:04.520 fused_ordering(2) 00:15:04.520 fused_ordering(3) 00:15:04.520 fused_ordering(4) 00:15:04.520 fused_ordering(5) 00:15:04.520 fused_ordering(6) 00:15:04.520 fused_ordering(7) 00:15:04.520 fused_ordering(8) 00:15:04.520 fused_ordering(9) 00:15:04.520 fused_ordering(10) 00:15:04.520 fused_ordering(11) 00:15:04.520 fused_ordering(12) 00:15:04.520 fused_ordering(13) 00:15:04.520 fused_ordering(14) 00:15:04.520 fused_ordering(15) 00:15:04.520 fused_ordering(16) 00:15:04.520 fused_ordering(17) 00:15:04.520 fused_ordering(18) 00:15:04.520 fused_ordering(19) 00:15:04.520 fused_ordering(20) 00:15:04.520 fused_ordering(21) 00:15:04.520 fused_ordering(22) 00:15:04.520 fused_ordering(23) 00:15:04.520 fused_ordering(24) 00:15:04.520 fused_ordering(25) 00:15:04.520 fused_ordering(26) 00:15:04.520 fused_ordering(27) 00:15:04.520 fused_ordering(28) 00:15:04.520 fused_ordering(29) 00:15:04.520 fused_ordering(30) 00:15:04.520 fused_ordering(31) 00:15:04.520 fused_ordering(32) 00:15:04.520 fused_ordering(33) 00:15:04.520 fused_ordering(34) 00:15:04.520 fused_ordering(35) 00:15:04.520 fused_ordering(36) 00:15:04.520 fused_ordering(37) 00:15:04.520 fused_ordering(38) 00:15:04.520 fused_ordering(39) 00:15:04.520 fused_ordering(40) 00:15:04.520 fused_ordering(41) 00:15:04.520 fused_ordering(42) 00:15:04.520 fused_ordering(43) 00:15:04.520 fused_ordering(44) 00:15:04.520 fused_ordering(45) 00:15:04.520 fused_ordering(46) 00:15:04.520 fused_ordering(47) 00:15:04.520 fused_ordering(48) 00:15:04.520 fused_ordering(49) 00:15:04.520 fused_ordering(50) 00:15:04.520 fused_ordering(51) 00:15:04.520 fused_ordering(52) 00:15:04.520 fused_ordering(53) 00:15:04.520 fused_ordering(54) 00:15:04.520 fused_ordering(55) 00:15:04.520 fused_ordering(56) 00:15:04.520 fused_ordering(57) 00:15:04.520 fused_ordering(58) 00:15:04.520 fused_ordering(59) 00:15:04.520 fused_ordering(60) 00:15:04.520 fused_ordering(61) 00:15:04.520 fused_ordering(62) 00:15:04.520 fused_ordering(63) 00:15:04.520 fused_ordering(64) 00:15:04.520 fused_ordering(65) 00:15:04.520 fused_ordering(66) 00:15:04.520 fused_ordering(67) 00:15:04.520 fused_ordering(68) 00:15:04.520 fused_ordering(69) 00:15:04.520 fused_ordering(70) 00:15:04.520 fused_ordering(71) 00:15:04.520 fused_ordering(72) 00:15:04.520 fused_ordering(73) 00:15:04.520 fused_ordering(74) 00:15:04.520 fused_ordering(75) 00:15:04.520 fused_ordering(76) 00:15:04.520 fused_ordering(77) 00:15:04.520 fused_ordering(78) 00:15:04.520 fused_ordering(79) 
00:15:04.520 fused_ordering(80) 00:15:04.520 fused_ordering(81) 00:15:04.520 fused_ordering(82) 00:15:04.520 fused_ordering(83) 00:15:04.520 fused_ordering(84) 00:15:04.520 fused_ordering(85) 00:15:04.520 fused_ordering(86) 00:15:04.520 fused_ordering(87) 00:15:04.520 fused_ordering(88) 00:15:04.520 fused_ordering(89) 00:15:04.520 fused_ordering(90) 00:15:04.520 fused_ordering(91) 00:15:04.520 fused_ordering(92) 00:15:04.520 fused_ordering(93) 00:15:04.520 fused_ordering(94) 00:15:04.520 fused_ordering(95) 00:15:04.520 fused_ordering(96) 00:15:04.520 fused_ordering(97) 00:15:04.520 fused_ordering(98) 00:15:04.520 fused_ordering(99) 00:15:04.520 fused_ordering(100) 00:15:04.520 fused_ordering(101) 00:15:04.520 fused_ordering(102) 00:15:04.520 fused_ordering(103) 00:15:04.520 fused_ordering(104) 00:15:04.520 fused_ordering(105) 00:15:04.520 fused_ordering(106) 00:15:04.520 fused_ordering(107) 00:15:04.520 fused_ordering(108) 00:15:04.520 fused_ordering(109) 00:15:04.520 fused_ordering(110) 00:15:04.520 fused_ordering(111) 00:15:04.520 fused_ordering(112) 00:15:04.520 fused_ordering(113) 00:15:04.520 fused_ordering(114) 00:15:04.520 fused_ordering(115) 00:15:04.520 fused_ordering(116) 00:15:04.520 fused_ordering(117) 00:15:04.520 fused_ordering(118) 00:15:04.520 fused_ordering(119) 00:15:04.520 fused_ordering(120) 00:15:04.520 fused_ordering(121) 00:15:04.520 fused_ordering(122) 00:15:04.520 fused_ordering(123) 00:15:04.520 fused_ordering(124) 00:15:04.520 fused_ordering(125) 00:15:04.520 fused_ordering(126) 00:15:04.520 fused_ordering(127) 00:15:04.520 fused_ordering(128) 00:15:04.520 fused_ordering(129) 00:15:04.520 fused_ordering(130) 00:15:04.520 fused_ordering(131) 00:15:04.520 fused_ordering(132) 00:15:04.520 fused_ordering(133) 00:15:04.520 fused_ordering(134) 00:15:04.520 fused_ordering(135) 00:15:04.520 fused_ordering(136) 00:15:04.520 fused_ordering(137) 00:15:04.520 fused_ordering(138) 00:15:04.520 fused_ordering(139) 00:15:04.520 fused_ordering(140) 00:15:04.520 fused_ordering(141) 00:15:04.520 fused_ordering(142) 00:15:04.520 fused_ordering(143) 00:15:04.520 fused_ordering(144) 00:15:04.520 fused_ordering(145) 00:15:04.520 fused_ordering(146) 00:15:04.520 fused_ordering(147) 00:15:04.520 fused_ordering(148) 00:15:04.520 fused_ordering(149) 00:15:04.520 fused_ordering(150) 00:15:04.520 fused_ordering(151) 00:15:04.520 fused_ordering(152) 00:15:04.520 fused_ordering(153) 00:15:04.520 fused_ordering(154) 00:15:04.520 fused_ordering(155) 00:15:04.520 fused_ordering(156) 00:15:04.520 fused_ordering(157) 00:15:04.520 fused_ordering(158) 00:15:04.520 fused_ordering(159) 00:15:04.520 fused_ordering(160) 00:15:04.520 fused_ordering(161) 00:15:04.520 fused_ordering(162) 00:15:04.520 fused_ordering(163) 00:15:04.520 fused_ordering(164) 00:15:04.520 fused_ordering(165) 00:15:04.520 fused_ordering(166) 00:15:04.520 fused_ordering(167) 00:15:04.520 fused_ordering(168) 00:15:04.520 fused_ordering(169) 00:15:04.520 fused_ordering(170) 00:15:04.520 fused_ordering(171) 00:15:04.520 fused_ordering(172) 00:15:04.520 fused_ordering(173) 00:15:04.520 fused_ordering(174) 00:15:04.520 fused_ordering(175) 00:15:04.520 fused_ordering(176) 00:15:04.520 fused_ordering(177) 00:15:04.520 fused_ordering(178) 00:15:04.520 fused_ordering(179) 00:15:04.520 fused_ordering(180) 00:15:04.520 fused_ordering(181) 00:15:04.520 fused_ordering(182) 00:15:04.520 fused_ordering(183) 00:15:04.520 fused_ordering(184) 00:15:04.520 fused_ordering(185) 00:15:04.520 fused_ordering(186) 00:15:04.520 fused_ordering(187) 
00:15:04.520 fused_ordering(188) 00:15:04.520 fused_ordering(189) 00:15:04.520 fused_ordering(190) 00:15:04.520 fused_ordering(191) 00:15:04.520 fused_ordering(192) 00:15:04.520 fused_ordering(193) 00:15:04.520 fused_ordering(194) 00:15:04.520 fused_ordering(195) 00:15:04.520 fused_ordering(196) 00:15:04.520 fused_ordering(197) 00:15:04.520 fused_ordering(198) 00:15:04.520 fused_ordering(199) 00:15:04.520 fused_ordering(200) 00:15:04.520 fused_ordering(201) 00:15:04.520 fused_ordering(202) 00:15:04.520 fused_ordering(203) 00:15:04.520 fused_ordering(204) 00:15:04.520 fused_ordering(205) 00:15:05.092 fused_ordering(206) 00:15:05.092 fused_ordering(207) 00:15:05.092 fused_ordering(208) 00:15:05.092 fused_ordering(209) 00:15:05.092 fused_ordering(210) 00:15:05.092 fused_ordering(211) 00:15:05.092 fused_ordering(212) 00:15:05.092 fused_ordering(213) 00:15:05.092 fused_ordering(214) 00:15:05.092 fused_ordering(215) 00:15:05.092 fused_ordering(216) 00:15:05.092 fused_ordering(217) 00:15:05.092 fused_ordering(218) 00:15:05.092 fused_ordering(219) 00:15:05.092 fused_ordering(220) 00:15:05.092 fused_ordering(221) 00:15:05.092 fused_ordering(222) 00:15:05.092 fused_ordering(223) 00:15:05.092 fused_ordering(224) 00:15:05.092 fused_ordering(225) 00:15:05.092 fused_ordering(226) 00:15:05.092 fused_ordering(227) 00:15:05.092 fused_ordering(228) 00:15:05.092 fused_ordering(229) 00:15:05.092 fused_ordering(230) 00:15:05.092 fused_ordering(231) 00:15:05.092 fused_ordering(232) 00:15:05.092 fused_ordering(233) 00:15:05.092 fused_ordering(234) 00:15:05.092 fused_ordering(235) 00:15:05.092 fused_ordering(236) 00:15:05.092 fused_ordering(237) 00:15:05.092 fused_ordering(238) 00:15:05.092 fused_ordering(239) 00:15:05.092 fused_ordering(240) 00:15:05.092 fused_ordering(241) 00:15:05.092 fused_ordering(242) 00:15:05.092 fused_ordering(243) 00:15:05.092 fused_ordering(244) 00:15:05.092 fused_ordering(245) 00:15:05.092 fused_ordering(246) 00:15:05.092 fused_ordering(247) 00:15:05.092 fused_ordering(248) 00:15:05.092 fused_ordering(249) 00:15:05.092 fused_ordering(250) 00:15:05.092 fused_ordering(251) 00:15:05.092 fused_ordering(252) 00:15:05.092 fused_ordering(253) 00:15:05.092 fused_ordering(254) 00:15:05.092 fused_ordering(255) 00:15:05.092 fused_ordering(256) 00:15:05.092 fused_ordering(257) 00:15:05.092 fused_ordering(258) 00:15:05.092 fused_ordering(259) 00:15:05.092 fused_ordering(260) 00:15:05.092 fused_ordering(261) 00:15:05.092 fused_ordering(262) 00:15:05.092 fused_ordering(263) 00:15:05.092 fused_ordering(264) 00:15:05.092 fused_ordering(265) 00:15:05.092 fused_ordering(266) 00:15:05.092 fused_ordering(267) 00:15:05.092 fused_ordering(268) 00:15:05.092 fused_ordering(269) 00:15:05.092 fused_ordering(270) 00:15:05.092 fused_ordering(271) 00:15:05.092 fused_ordering(272) 00:15:05.092 fused_ordering(273) 00:15:05.092 fused_ordering(274) 00:15:05.092 fused_ordering(275) 00:15:05.092 fused_ordering(276) 00:15:05.092 fused_ordering(277) 00:15:05.092 fused_ordering(278) 00:15:05.092 fused_ordering(279) 00:15:05.092 fused_ordering(280) 00:15:05.092 fused_ordering(281) 00:15:05.092 fused_ordering(282) 00:15:05.092 fused_ordering(283) 00:15:05.092 fused_ordering(284) 00:15:05.092 fused_ordering(285) 00:15:05.092 fused_ordering(286) 00:15:05.092 fused_ordering(287) 00:15:05.092 fused_ordering(288) 00:15:05.092 fused_ordering(289) 00:15:05.092 fused_ordering(290) 00:15:05.092 fused_ordering(291) 00:15:05.092 fused_ordering(292) 00:15:05.092 fused_ordering(293) 00:15:05.092 fused_ordering(294) 00:15:05.092 
fused_ordering(295) 00:15:05.092 fused_ordering(296) 00:15:05.092 fused_ordering(297) 00:15:05.092 fused_ordering(298) 00:15:05.092 fused_ordering(299) 00:15:05.092 fused_ordering(300) 00:15:05.092 fused_ordering(301) 00:15:05.092 fused_ordering(302) 00:15:05.092 fused_ordering(303) 00:15:05.092 fused_ordering(304) 00:15:05.092 fused_ordering(305) 00:15:05.092 fused_ordering(306) 00:15:05.092 fused_ordering(307) 00:15:05.093 fused_ordering(308) 00:15:05.093 fused_ordering(309) 00:15:05.093 fused_ordering(310) 00:15:05.093 fused_ordering(311) 00:15:05.093 fused_ordering(312) 00:15:05.093 fused_ordering(313) 00:15:05.093 fused_ordering(314) 00:15:05.093 fused_ordering(315) 00:15:05.093 fused_ordering(316) 00:15:05.093 fused_ordering(317) 00:15:05.093 fused_ordering(318) 00:15:05.093 fused_ordering(319) 00:15:05.093 fused_ordering(320) 00:15:05.093 fused_ordering(321) 00:15:05.093 fused_ordering(322) 00:15:05.093 fused_ordering(323) 00:15:05.093 fused_ordering(324) 00:15:05.093 fused_ordering(325) 00:15:05.093 fused_ordering(326) 00:15:05.093 fused_ordering(327) 00:15:05.093 fused_ordering(328) 00:15:05.093 fused_ordering(329) 00:15:05.093 fused_ordering(330) 00:15:05.093 fused_ordering(331) 00:15:05.093 fused_ordering(332) 00:15:05.093 fused_ordering(333) 00:15:05.093 fused_ordering(334) 00:15:05.093 fused_ordering(335) 00:15:05.093 fused_ordering(336) 00:15:05.093 fused_ordering(337) 00:15:05.093 fused_ordering(338) 00:15:05.093 fused_ordering(339) 00:15:05.093 fused_ordering(340) 00:15:05.093 fused_ordering(341) 00:15:05.093 fused_ordering(342) 00:15:05.093 fused_ordering(343) 00:15:05.093 fused_ordering(344) 00:15:05.093 fused_ordering(345) 00:15:05.093 fused_ordering(346) 00:15:05.093 fused_ordering(347) 00:15:05.093 fused_ordering(348) 00:15:05.093 fused_ordering(349) 00:15:05.093 fused_ordering(350) 00:15:05.093 fused_ordering(351) 00:15:05.093 fused_ordering(352) 00:15:05.093 fused_ordering(353) 00:15:05.093 fused_ordering(354) 00:15:05.093 fused_ordering(355) 00:15:05.093 fused_ordering(356) 00:15:05.093 fused_ordering(357) 00:15:05.093 fused_ordering(358) 00:15:05.093 fused_ordering(359) 00:15:05.093 fused_ordering(360) 00:15:05.093 fused_ordering(361) 00:15:05.093 fused_ordering(362) 00:15:05.093 fused_ordering(363) 00:15:05.093 fused_ordering(364) 00:15:05.093 fused_ordering(365) 00:15:05.093 fused_ordering(366) 00:15:05.093 fused_ordering(367) 00:15:05.093 fused_ordering(368) 00:15:05.093 fused_ordering(369) 00:15:05.093 fused_ordering(370) 00:15:05.093 fused_ordering(371) 00:15:05.093 fused_ordering(372) 00:15:05.093 fused_ordering(373) 00:15:05.093 fused_ordering(374) 00:15:05.093 fused_ordering(375) 00:15:05.093 fused_ordering(376) 00:15:05.093 fused_ordering(377) 00:15:05.093 fused_ordering(378) 00:15:05.093 fused_ordering(379) 00:15:05.093 fused_ordering(380) 00:15:05.093 fused_ordering(381) 00:15:05.093 fused_ordering(382) 00:15:05.093 fused_ordering(383) 00:15:05.093 fused_ordering(384) 00:15:05.093 fused_ordering(385) 00:15:05.093 fused_ordering(386) 00:15:05.093 fused_ordering(387) 00:15:05.093 fused_ordering(388) 00:15:05.093 fused_ordering(389) 00:15:05.093 fused_ordering(390) 00:15:05.093 fused_ordering(391) 00:15:05.093 fused_ordering(392) 00:15:05.093 fused_ordering(393) 00:15:05.093 fused_ordering(394) 00:15:05.093 fused_ordering(395) 00:15:05.093 fused_ordering(396) 00:15:05.093 fused_ordering(397) 00:15:05.093 fused_ordering(398) 00:15:05.093 fused_ordering(399) 00:15:05.093 fused_ordering(400) 00:15:05.093 fused_ordering(401) 00:15:05.093 fused_ordering(402) 
00:15:05.093 fused_ordering(403) 00:15:05.093 fused_ordering(404) 00:15:05.093 fused_ordering(405) 00:15:05.093 fused_ordering(406) 00:15:05.093 fused_ordering(407) 00:15:05.093 fused_ordering(408) 00:15:05.093 fused_ordering(409) 00:15:05.093 fused_ordering(410) 00:15:05.354 fused_ordering(411) 00:15:05.354 fused_ordering(412) 00:15:05.354 fused_ordering(413) 00:15:05.354 fused_ordering(414) 00:15:05.354 fused_ordering(415) 00:15:05.354 fused_ordering(416) 00:15:05.354 fused_ordering(417) 00:15:05.354 fused_ordering(418) 00:15:05.354 fused_ordering(419) 00:15:05.354 fused_ordering(420) 00:15:05.354 fused_ordering(421) 00:15:05.354 fused_ordering(422) 00:15:05.354 fused_ordering(423) 00:15:05.354 fused_ordering(424) 00:15:05.354 fused_ordering(425) 00:15:05.354 fused_ordering(426) 00:15:05.354 fused_ordering(427) 00:15:05.354 fused_ordering(428) 00:15:05.354 fused_ordering(429) 00:15:05.354 fused_ordering(430) 00:15:05.354 fused_ordering(431) 00:15:05.354 fused_ordering(432) 00:15:05.354 fused_ordering(433) 00:15:05.354 fused_ordering(434) 00:15:05.354 fused_ordering(435) 00:15:05.354 fused_ordering(436) 00:15:05.354 fused_ordering(437) 00:15:05.354 fused_ordering(438) 00:15:05.354 fused_ordering(439) 00:15:05.354 fused_ordering(440) 00:15:05.354 fused_ordering(441) 00:15:05.354 fused_ordering(442) 00:15:05.354 fused_ordering(443) 00:15:05.354 fused_ordering(444) 00:15:05.354 fused_ordering(445) 00:15:05.354 fused_ordering(446) 00:15:05.354 fused_ordering(447) 00:15:05.354 fused_ordering(448) 00:15:05.354 fused_ordering(449) 00:15:05.354 fused_ordering(450) 00:15:05.354 fused_ordering(451) 00:15:05.354 fused_ordering(452) 00:15:05.354 fused_ordering(453) 00:15:05.354 fused_ordering(454) 00:15:05.354 fused_ordering(455) 00:15:05.354 fused_ordering(456) 00:15:05.354 fused_ordering(457) 00:15:05.354 fused_ordering(458) 00:15:05.354 fused_ordering(459) 00:15:05.354 fused_ordering(460) 00:15:05.354 fused_ordering(461) 00:15:05.354 fused_ordering(462) 00:15:05.354 fused_ordering(463) 00:15:05.354 fused_ordering(464) 00:15:05.354 fused_ordering(465) 00:15:05.354 fused_ordering(466) 00:15:05.354 fused_ordering(467) 00:15:05.354 fused_ordering(468) 00:15:05.354 fused_ordering(469) 00:15:05.354 fused_ordering(470) 00:15:05.354 fused_ordering(471) 00:15:05.354 fused_ordering(472) 00:15:05.354 fused_ordering(473) 00:15:05.354 fused_ordering(474) 00:15:05.354 fused_ordering(475) 00:15:05.354 fused_ordering(476) 00:15:05.354 fused_ordering(477) 00:15:05.354 fused_ordering(478) 00:15:05.354 fused_ordering(479) 00:15:05.354 fused_ordering(480) 00:15:05.354 fused_ordering(481) 00:15:05.354 fused_ordering(482) 00:15:05.354 fused_ordering(483) 00:15:05.354 fused_ordering(484) 00:15:05.354 fused_ordering(485) 00:15:05.354 fused_ordering(486) 00:15:05.354 fused_ordering(487) 00:15:05.354 fused_ordering(488) 00:15:05.354 fused_ordering(489) 00:15:05.354 fused_ordering(490) 00:15:05.354 fused_ordering(491) 00:15:05.354 fused_ordering(492) 00:15:05.354 fused_ordering(493) 00:15:05.354 fused_ordering(494) 00:15:05.354 fused_ordering(495) 00:15:05.354 fused_ordering(496) 00:15:05.354 fused_ordering(497) 00:15:05.354 fused_ordering(498) 00:15:05.354 fused_ordering(499) 00:15:05.354 fused_ordering(500) 00:15:05.354 fused_ordering(501) 00:15:05.354 fused_ordering(502) 00:15:05.354 fused_ordering(503) 00:15:05.354 fused_ordering(504) 00:15:05.354 fused_ordering(505) 00:15:05.354 fused_ordering(506) 00:15:05.354 fused_ordering(507) 00:15:05.354 fused_ordering(508) 00:15:05.354 fused_ordering(509) 00:15:05.354 
fused_ordering(510) 00:15:05.354 fused_ordering(511) 00:15:05.354 fused_ordering(512) 00:15:05.354 fused_ordering(513) 00:15:05.354 fused_ordering(514) 00:15:05.354 fused_ordering(515) 00:15:05.354 fused_ordering(516) 00:15:05.354 fused_ordering(517) 00:15:05.354 fused_ordering(518) 00:15:05.354 fused_ordering(519) 00:15:05.354 fused_ordering(520) 00:15:05.354 fused_ordering(521) 00:15:05.354 fused_ordering(522) 00:15:05.354 fused_ordering(523) 00:15:05.354 fused_ordering(524) 00:15:05.354 fused_ordering(525) 00:15:05.355 fused_ordering(526) 00:15:05.355 fused_ordering(527) 00:15:05.355 fused_ordering(528) 00:15:05.355 fused_ordering(529) 00:15:05.355 fused_ordering(530) 00:15:05.355 fused_ordering(531) 00:15:05.355 fused_ordering(532) 00:15:05.355 fused_ordering(533) 00:15:05.355 fused_ordering(534) 00:15:05.355 fused_ordering(535) 00:15:05.355 fused_ordering(536) 00:15:05.355 fused_ordering(537) 00:15:05.355 fused_ordering(538) 00:15:05.355 fused_ordering(539) 00:15:05.355 fused_ordering(540) 00:15:05.355 fused_ordering(541) 00:15:05.355 fused_ordering(542) 00:15:05.355 fused_ordering(543) 00:15:05.355 fused_ordering(544) 00:15:05.355 fused_ordering(545) 00:15:05.355 fused_ordering(546) 00:15:05.355 fused_ordering(547) 00:15:05.355 fused_ordering(548) 00:15:05.355 fused_ordering(549) 00:15:05.355 fused_ordering(550) 00:15:05.355 fused_ordering(551) 00:15:05.355 fused_ordering(552) 00:15:05.355 fused_ordering(553) 00:15:05.355 fused_ordering(554) 00:15:05.355 fused_ordering(555) 00:15:05.355 fused_ordering(556) 00:15:05.355 fused_ordering(557) 00:15:05.355 fused_ordering(558) 00:15:05.355 fused_ordering(559) 00:15:05.355 fused_ordering(560) 00:15:05.355 fused_ordering(561) 00:15:05.355 fused_ordering(562) 00:15:05.355 fused_ordering(563) 00:15:05.355 fused_ordering(564) 00:15:05.355 fused_ordering(565) 00:15:05.355 fused_ordering(566) 00:15:05.355 fused_ordering(567) 00:15:05.355 fused_ordering(568) 00:15:05.355 fused_ordering(569) 00:15:05.355 fused_ordering(570) 00:15:05.355 fused_ordering(571) 00:15:05.355 fused_ordering(572) 00:15:05.355 fused_ordering(573) 00:15:05.355 fused_ordering(574) 00:15:05.355 fused_ordering(575) 00:15:05.355 fused_ordering(576) 00:15:05.355 fused_ordering(577) 00:15:05.355 fused_ordering(578) 00:15:05.355 fused_ordering(579) 00:15:05.355 fused_ordering(580) 00:15:05.355 fused_ordering(581) 00:15:05.355 fused_ordering(582) 00:15:05.355 fused_ordering(583) 00:15:05.355 fused_ordering(584) 00:15:05.355 fused_ordering(585) 00:15:05.355 fused_ordering(586) 00:15:05.355 fused_ordering(587) 00:15:05.355 fused_ordering(588) 00:15:05.355 fused_ordering(589) 00:15:05.355 fused_ordering(590) 00:15:05.355 fused_ordering(591) 00:15:05.355 fused_ordering(592) 00:15:05.355 fused_ordering(593) 00:15:05.355 fused_ordering(594) 00:15:05.355 fused_ordering(595) 00:15:05.355 fused_ordering(596) 00:15:05.355 fused_ordering(597) 00:15:05.355 fused_ordering(598) 00:15:05.355 fused_ordering(599) 00:15:05.355 fused_ordering(600) 00:15:05.355 fused_ordering(601) 00:15:05.355 fused_ordering(602) 00:15:05.355 fused_ordering(603) 00:15:05.355 fused_ordering(604) 00:15:05.355 fused_ordering(605) 00:15:05.355 fused_ordering(606) 00:15:05.355 fused_ordering(607) 00:15:05.355 fused_ordering(608) 00:15:05.355 fused_ordering(609) 00:15:05.355 fused_ordering(610) 00:15:05.355 fused_ordering(611) 00:15:05.355 fused_ordering(612) 00:15:05.355 fused_ordering(613) 00:15:05.355 fused_ordering(614) 00:15:05.355 fused_ordering(615) 00:15:05.927 fused_ordering(616) 00:15:05.927 fused_ordering(617) 
00:15:05.927 fused_ordering(618) 00:15:05.927 fused_ordering(619) 00:15:05.927 fused_ordering(620) 00:15:05.927 fused_ordering(621) 00:15:05.927 fused_ordering(622) 00:15:05.927 fused_ordering(623) 00:15:05.927 fused_ordering(624) 00:15:05.927 fused_ordering(625) 00:15:05.927 fused_ordering(626) 00:15:05.927 fused_ordering(627) 00:15:05.927 fused_ordering(628) 00:15:05.927 fused_ordering(629) 00:15:05.927 fused_ordering(630) 00:15:05.927 fused_ordering(631) 00:15:05.927 fused_ordering(632) 00:15:05.927 fused_ordering(633) 00:15:05.927 fused_ordering(634) 00:15:05.927 fused_ordering(635) 00:15:05.927 fused_ordering(636) 00:15:05.927 fused_ordering(637) 00:15:05.927 fused_ordering(638) 00:15:05.927 fused_ordering(639) 00:15:05.927 fused_ordering(640) 00:15:05.927 fused_ordering(641) 00:15:05.927 fused_ordering(642) 00:15:05.927 fused_ordering(643) 00:15:05.927 fused_ordering(644) 00:15:05.927 fused_ordering(645) 00:15:05.927 fused_ordering(646) 00:15:05.927 fused_ordering(647) 00:15:05.927 fused_ordering(648) 00:15:05.927 fused_ordering(649) 00:15:05.927 fused_ordering(650) 00:15:05.927 fused_ordering(651) 00:15:05.927 fused_ordering(652) 00:15:05.927 fused_ordering(653) 00:15:05.927 fused_ordering(654) 00:15:05.927 fused_ordering(655) 00:15:05.927 fused_ordering(656) 00:15:05.927 fused_ordering(657) 00:15:05.927 fused_ordering(658) 00:15:05.927 fused_ordering(659) 00:15:05.927 fused_ordering(660) 00:15:05.927 fused_ordering(661) 00:15:05.927 fused_ordering(662) 00:15:05.927 fused_ordering(663) 00:15:05.927 fused_ordering(664) 00:15:05.927 fused_ordering(665) 00:15:05.927 fused_ordering(666) 00:15:05.927 fused_ordering(667) 00:15:05.927 fused_ordering(668) 00:15:05.927 fused_ordering(669) 00:15:05.927 fused_ordering(670) 00:15:05.927 fused_ordering(671) 00:15:05.927 fused_ordering(672) 00:15:05.927 fused_ordering(673) 00:15:05.927 fused_ordering(674) 00:15:05.927 fused_ordering(675) 00:15:05.927 fused_ordering(676) 00:15:05.927 fused_ordering(677) 00:15:05.927 fused_ordering(678) 00:15:05.927 fused_ordering(679) 00:15:05.927 fused_ordering(680) 00:15:05.927 fused_ordering(681) 00:15:05.927 fused_ordering(682) 00:15:05.927 fused_ordering(683) 00:15:05.927 fused_ordering(684) 00:15:05.927 fused_ordering(685) 00:15:05.927 fused_ordering(686) 00:15:05.927 fused_ordering(687) 00:15:05.927 fused_ordering(688) 00:15:05.927 fused_ordering(689) 00:15:05.927 fused_ordering(690) 00:15:05.927 fused_ordering(691) 00:15:05.927 fused_ordering(692) 00:15:05.927 fused_ordering(693) 00:15:05.927 fused_ordering(694) 00:15:05.927 fused_ordering(695) 00:15:05.927 fused_ordering(696) 00:15:05.927 fused_ordering(697) 00:15:05.927 fused_ordering(698) 00:15:05.927 fused_ordering(699) 00:15:05.927 fused_ordering(700) 00:15:05.927 fused_ordering(701) 00:15:05.927 fused_ordering(702) 00:15:05.927 fused_ordering(703) 00:15:05.927 fused_ordering(704) 00:15:05.927 fused_ordering(705) 00:15:05.927 fused_ordering(706) 00:15:05.927 fused_ordering(707) 00:15:05.927 fused_ordering(708) 00:15:05.927 fused_ordering(709) 00:15:05.927 fused_ordering(710) 00:15:05.927 fused_ordering(711) 00:15:05.927 fused_ordering(712) 00:15:05.927 fused_ordering(713) 00:15:05.927 fused_ordering(714) 00:15:05.927 fused_ordering(715) 00:15:05.927 fused_ordering(716) 00:15:05.927 fused_ordering(717) 00:15:05.927 fused_ordering(718) 00:15:05.927 fused_ordering(719) 00:15:05.927 fused_ordering(720) 00:15:05.927 fused_ordering(721) 00:15:05.927 fused_ordering(722) 00:15:05.927 fused_ordering(723) 00:15:05.927 fused_ordering(724) 00:15:05.927 
fused_ordering(725) 00:15:05.927 fused_ordering(726) 00:15:05.927 fused_ordering(727) 00:15:05.927 fused_ordering(728) 00:15:05.927 fused_ordering(729) 00:15:05.927 fused_ordering(730) 00:15:05.927 fused_ordering(731) 00:15:05.927 fused_ordering(732) 00:15:05.927 fused_ordering(733) 00:15:05.927 fused_ordering(734) 00:15:05.927 fused_ordering(735) 00:15:05.927 fused_ordering(736) 00:15:05.927 fused_ordering(737) 00:15:05.927 fused_ordering(738) 00:15:05.927 fused_ordering(739) 00:15:05.927 fused_ordering(740) 00:15:05.927 fused_ordering(741) 00:15:05.927 fused_ordering(742) 00:15:05.927 fused_ordering(743) 00:15:05.927 fused_ordering(744) 00:15:05.927 fused_ordering(745) 00:15:05.927 fused_ordering(746) 00:15:05.927 fused_ordering(747) 00:15:05.927 fused_ordering(748) 00:15:05.927 fused_ordering(749) 00:15:05.927 fused_ordering(750) 00:15:05.927 fused_ordering(751) 00:15:05.927 fused_ordering(752) 00:15:05.927 fused_ordering(753) 00:15:05.927 fused_ordering(754) 00:15:05.927 fused_ordering(755) 00:15:05.927 fused_ordering(756) 00:15:05.927 fused_ordering(757) 00:15:05.927 fused_ordering(758) 00:15:05.927 fused_ordering(759) 00:15:05.927 fused_ordering(760) 00:15:05.927 fused_ordering(761) 00:15:05.927 fused_ordering(762) 00:15:05.927 fused_ordering(763) 00:15:05.927 fused_ordering(764) 00:15:05.927 fused_ordering(765) 00:15:05.927 fused_ordering(766) 00:15:05.927 fused_ordering(767) 00:15:05.927 fused_ordering(768) 00:15:05.927 fused_ordering(769) 00:15:05.927 fused_ordering(770) 00:15:05.927 fused_ordering(771) 00:15:05.927 fused_ordering(772) 00:15:05.927 fused_ordering(773) 00:15:05.927 fused_ordering(774) 00:15:05.927 fused_ordering(775) 00:15:05.927 fused_ordering(776) 00:15:05.927 fused_ordering(777) 00:15:05.927 fused_ordering(778) 00:15:05.927 fused_ordering(779) 00:15:05.927 fused_ordering(780) 00:15:05.927 fused_ordering(781) 00:15:05.927 fused_ordering(782) 00:15:05.927 fused_ordering(783) 00:15:05.927 fused_ordering(784) 00:15:05.927 fused_ordering(785) 00:15:05.927 fused_ordering(786) 00:15:05.927 fused_ordering(787) 00:15:05.927 fused_ordering(788) 00:15:05.927 fused_ordering(789) 00:15:05.927 fused_ordering(790) 00:15:05.927 fused_ordering(791) 00:15:05.927 fused_ordering(792) 00:15:05.927 fused_ordering(793) 00:15:05.927 fused_ordering(794) 00:15:05.927 fused_ordering(795) 00:15:05.927 fused_ordering(796) 00:15:05.927 fused_ordering(797) 00:15:05.927 fused_ordering(798) 00:15:05.927 fused_ordering(799) 00:15:05.927 fused_ordering(800) 00:15:05.927 fused_ordering(801) 00:15:05.927 fused_ordering(802) 00:15:05.927 fused_ordering(803) 00:15:05.927 fused_ordering(804) 00:15:05.927 fused_ordering(805) 00:15:05.927 fused_ordering(806) 00:15:05.927 fused_ordering(807) 00:15:05.927 fused_ordering(808) 00:15:05.927 fused_ordering(809) 00:15:05.927 fused_ordering(810) 00:15:05.927 fused_ordering(811) 00:15:05.927 fused_ordering(812) 00:15:05.927 fused_ordering(813) 00:15:05.927 fused_ordering(814) 00:15:05.927 fused_ordering(815) 00:15:05.927 fused_ordering(816) 00:15:05.927 fused_ordering(817) 00:15:05.927 fused_ordering(818) 00:15:05.927 fused_ordering(819) 00:15:05.927 fused_ordering(820) 00:15:06.499 fused_ordering(821) 00:15:06.499 fused_ordering(822) 00:15:06.499 fused_ordering(823) 00:15:06.499 fused_ordering(824) 00:15:06.499 fused_ordering(825) 00:15:06.499 fused_ordering(826) 00:15:06.499 fused_ordering(827) 00:15:06.499 fused_ordering(828) 00:15:06.499 fused_ordering(829) 00:15:06.499 fused_ordering(830) 00:15:06.499 fused_ordering(831) 00:15:06.499 fused_ordering(832) 
00:15:06.499 fused_ordering(833) 00:15:06.499 fused_ordering(834) 00:15:06.499 fused_ordering(835) 00:15:06.499 fused_ordering(836) 00:15:06.499 fused_ordering(837) 00:15:06.499 fused_ordering(838) 00:15:06.499 fused_ordering(839) 00:15:06.499 fused_ordering(840) 00:15:06.499 fused_ordering(841) 00:15:06.499 fused_ordering(842) 00:15:06.499 fused_ordering(843) 00:15:06.499 fused_ordering(844) 00:15:06.499 fused_ordering(845) 00:15:06.499 fused_ordering(846) 00:15:06.499 fused_ordering(847) 00:15:06.499 fused_ordering(848) 00:15:06.499 fused_ordering(849) 00:15:06.499 fused_ordering(850) 00:15:06.499 fused_ordering(851) 00:15:06.499 fused_ordering(852) 00:15:06.499 fused_ordering(853) 00:15:06.499 fused_ordering(854) 00:15:06.499 fused_ordering(855) 00:15:06.499 fused_ordering(856) 00:15:06.499 fused_ordering(857) 00:15:06.499 fused_ordering(858) 00:15:06.499 fused_ordering(859) 00:15:06.499 fused_ordering(860) 00:15:06.499 fused_ordering(861) 00:15:06.499 fused_ordering(862) 00:15:06.499 fused_ordering(863) 00:15:06.499 fused_ordering(864) 00:15:06.499 fused_ordering(865) 00:15:06.499 fused_ordering(866) 00:15:06.499 fused_ordering(867) 00:15:06.499 fused_ordering(868) 00:15:06.499 fused_ordering(869) 00:15:06.499 fused_ordering(870) 00:15:06.499 fused_ordering(871) 00:15:06.499 fused_ordering(872) 00:15:06.499 fused_ordering(873) 00:15:06.499 fused_ordering(874) 00:15:06.499 fused_ordering(875) 00:15:06.499 fused_ordering(876) 00:15:06.499 fused_ordering(877) 00:15:06.499 fused_ordering(878) 00:15:06.499 fused_ordering(879) 00:15:06.499 fused_ordering(880) 00:15:06.499 fused_ordering(881) 00:15:06.499 fused_ordering(882) 00:15:06.499 fused_ordering(883) 00:15:06.499 fused_ordering(884) 00:15:06.499 fused_ordering(885) 00:15:06.499 fused_ordering(886) 00:15:06.499 fused_ordering(887) 00:15:06.499 fused_ordering(888) 00:15:06.499 fused_ordering(889) 00:15:06.499 fused_ordering(890) 00:15:06.499 fused_ordering(891) 00:15:06.499 fused_ordering(892) 00:15:06.499 fused_ordering(893) 00:15:06.499 fused_ordering(894) 00:15:06.499 fused_ordering(895) 00:15:06.499 fused_ordering(896) 00:15:06.499 fused_ordering(897) 00:15:06.499 fused_ordering(898) 00:15:06.499 fused_ordering(899) 00:15:06.499 fused_ordering(900) 00:15:06.499 fused_ordering(901) 00:15:06.499 fused_ordering(902) 00:15:06.499 fused_ordering(903) 00:15:06.499 fused_ordering(904) 00:15:06.499 fused_ordering(905) 00:15:06.499 fused_ordering(906) 00:15:06.499 fused_ordering(907) 00:15:06.499 fused_ordering(908) 00:15:06.499 fused_ordering(909) 00:15:06.499 fused_ordering(910) 00:15:06.499 fused_ordering(911) 00:15:06.499 fused_ordering(912) 00:15:06.499 fused_ordering(913) 00:15:06.499 fused_ordering(914) 00:15:06.499 fused_ordering(915) 00:15:06.499 fused_ordering(916) 00:15:06.499 fused_ordering(917) 00:15:06.499 fused_ordering(918) 00:15:06.499 fused_ordering(919) 00:15:06.499 fused_ordering(920) 00:15:06.499 fused_ordering(921) 00:15:06.499 fused_ordering(922) 00:15:06.499 fused_ordering(923) 00:15:06.499 fused_ordering(924) 00:15:06.499 fused_ordering(925) 00:15:06.499 fused_ordering(926) 00:15:06.499 fused_ordering(927) 00:15:06.499 fused_ordering(928) 00:15:06.499 fused_ordering(929) 00:15:06.499 fused_ordering(930) 00:15:06.499 fused_ordering(931) 00:15:06.499 fused_ordering(932) 00:15:06.499 fused_ordering(933) 00:15:06.499 fused_ordering(934) 00:15:06.499 fused_ordering(935) 00:15:06.499 fused_ordering(936) 00:15:06.499 fused_ordering(937) 00:15:06.499 fused_ordering(938) 00:15:06.499 fused_ordering(939) 00:15:06.499 
fused_ordering(940) 00:15:06.499 fused_ordering(941) 00:15:06.499 fused_ordering(942) 00:15:06.499 fused_ordering(943) 00:15:06.499 fused_ordering(944) 00:15:06.499 fused_ordering(945) 00:15:06.499 fused_ordering(946) 00:15:06.499 fused_ordering(947) 00:15:06.499 fused_ordering(948) 00:15:06.499 fused_ordering(949) 00:15:06.499 fused_ordering(950) 00:15:06.499 fused_ordering(951) 00:15:06.499 fused_ordering(952) 00:15:06.499 fused_ordering(953) 00:15:06.499 fused_ordering(954) 00:15:06.499 fused_ordering(955) 00:15:06.499 fused_ordering(956) 00:15:06.499 fused_ordering(957) 00:15:06.499 fused_ordering(958) 00:15:06.499 fused_ordering(959) 00:15:06.499 fused_ordering(960) 00:15:06.499 fused_ordering(961) 00:15:06.499 fused_ordering(962) 00:15:06.499 fused_ordering(963) 00:15:06.499 fused_ordering(964) 00:15:06.499 fused_ordering(965) 00:15:06.499 fused_ordering(966) 00:15:06.499 fused_ordering(967) 00:15:06.499 fused_ordering(968) 00:15:06.499 fused_ordering(969) 00:15:06.499 fused_ordering(970) 00:15:06.499 fused_ordering(971) 00:15:06.499 fused_ordering(972) 00:15:06.499 fused_ordering(973) 00:15:06.499 fused_ordering(974) 00:15:06.499 fused_ordering(975) 00:15:06.499 fused_ordering(976) 00:15:06.499 fused_ordering(977) 00:15:06.499 fused_ordering(978) 00:15:06.499 fused_ordering(979) 00:15:06.499 fused_ordering(980) 00:15:06.499 fused_ordering(981) 00:15:06.499 fused_ordering(982) 00:15:06.499 fused_ordering(983) 00:15:06.499 fused_ordering(984) 00:15:06.499 fused_ordering(985) 00:15:06.499 fused_ordering(986) 00:15:06.499 fused_ordering(987) 00:15:06.499 fused_ordering(988) 00:15:06.499 fused_ordering(989) 00:15:06.499 fused_ordering(990) 00:15:06.499 fused_ordering(991) 00:15:06.499 fused_ordering(992) 00:15:06.499 fused_ordering(993) 00:15:06.499 fused_ordering(994) 00:15:06.499 fused_ordering(995) 00:15:06.499 fused_ordering(996) 00:15:06.499 fused_ordering(997) 00:15:06.499 fused_ordering(998) 00:15:06.499 fused_ordering(999) 00:15:06.499 fused_ordering(1000) 00:15:06.499 fused_ordering(1001) 00:15:06.499 fused_ordering(1002) 00:15:06.499 fused_ordering(1003) 00:15:06.499 fused_ordering(1004) 00:15:06.499 fused_ordering(1005) 00:15:06.499 fused_ordering(1006) 00:15:06.499 fused_ordering(1007) 00:15:06.499 fused_ordering(1008) 00:15:06.499 fused_ordering(1009) 00:15:06.499 fused_ordering(1010) 00:15:06.499 fused_ordering(1011) 00:15:06.499 fused_ordering(1012) 00:15:06.499 fused_ordering(1013) 00:15:06.499 fused_ordering(1014) 00:15:06.499 fused_ordering(1015) 00:15:06.499 fused_ordering(1016) 00:15:06.499 fused_ordering(1017) 00:15:06.499 fused_ordering(1018) 00:15:06.499 fused_ordering(1019) 00:15:06.499 fused_ordering(1020) 00:15:06.499 fused_ordering(1021) 00:15:06.499 fused_ordering(1022) 00:15:06.499 fused_ordering(1023) 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:15:06.499 rmmod nvme_tcp 00:15:06.499 rmmod nvme_fabrics 00:15:06.499 rmmod nvme_keyring 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3887758 ']' 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3887758 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3887758 ']' 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3887758 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:06.499 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3887758 00:15:06.762 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:06.762 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:06.762 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3887758' 00:15:06.762 killing process with pid 3887758 00:15:06.762 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3887758 00:15:06.762 01:33:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3887758 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.762 01:33:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.305 01:33:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.305 00:15:09.305 real 0m13.965s 00:15:09.305 user 0m7.176s 00:15:09.305 sys 0m7.473s 00:15:09.305 01:33:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:09.305 01:33:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:09.305 ************************************ 00:15:09.305 END TEST nvmf_fused_ordering 00:15:09.305 ************************************ 00:15:09.305 01:33:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:09.305 01:33:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:09.305 01:33:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.305 01:33:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.305 
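The fused_ordering run recorded above is driven entirely over SPDK's JSON-RPC interface: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is exposed on 10.0.0.2:4420, a 1 GB null bdev (512-byte blocks) is attached as namespace 1, and the fused_ordering tool then issues the 1024 fused command pairs shown as fused_ordering(0) through fused_ordering(1023). A minimal sketch of the same sequence, replayed by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket, is below; the repo-relative paths and the shell variables are illustrative assumptions, while the RPC names and flags are taken from the log itself.

# Assumed repo-relative layout; the log uses absolute Jenkins workspace paths.
SPDK_DIR=./spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

# Transport, subsystem, listener, and namespace setup as traced in the log.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Drive the fused-ordering workload against the listener created above.
$SPDK_DIR/test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'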
************************************ 00:15:09.305 START TEST nvmf_delete_subsystem 00:15:09.305 ************************************ 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:09.305 * Looking for test storage... 00:15:09.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.305 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.306 01:33:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.306 01:33:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:17.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:17.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.446 
01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:17.446 Found net devices under 0000:31:00.0: cvl_0_0 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:17.446 Found net devices under 0000:31:00.1: cvl_0_1 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.446 01:33:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:15:17.446 00:15:17.446 --- 10.0.0.2 ping statistics --- 00:15:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.446 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:15:17.446 00:15:17.446 --- 10.0.0.1 ping statistics --- 00:15:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.446 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3893126 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3893126 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3893126 ']' 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.446 01:33:43 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:17.446 01:33:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:17.446 [2024-07-12 01:33:43.644839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:17.446 [2024-07-12 01:33:43.644903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.446 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.446 [2024-07-12 01:33:43.723468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:17.446 [2024-07-12 01:33:43.762013] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.447 [2024-07-12 01:33:43.762056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.447 [2024-07-12 01:33:43.762064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.447 [2024-07-12 01:33:43.762070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.447 [2024-07-12 01:33:43.762076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
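For reference, the nvmf_tcp_init block traced above reduces to the commands below. This is a sketch of this particular run only: the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, the 10.0.0.x addresses and the 0x3 core mask are specific to this host and invocation, and paths are abbreviated relative to the spdk checkout.
  # move the target-side interface into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open NVMe/TCP port 4420 and sanity-check connectivity in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmf_tgt is then started inside the namespace
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3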
00:15:17.447 [2024-07-12 01:33:43.762217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.447 [2024-07-12 01:33:43.762219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 [2024-07-12 01:33:44.456976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 [2024-07-12 01:33:44.473117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 NULL1 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 Delay0 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3893158 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:18.385 01:33:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:18.385 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.386 [2024-07-12 01:33:44.557729] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:20.373 01:33:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.373 01:33:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.373 01:33:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 
00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with 
error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Read completed with error (sct=0, sc=8) 00:15:20.646 Write completed with error (sct=0, sc=8) 00:15:20.646 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read 
completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 starting I/O failed: -6 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 
00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Read completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 Write completed with error (sct=0, sc=8) 00:15:20.647 starting I/O failed: -6 00:15:20.647 [2024-07-12 01:33:46.816522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d6c000c00 is same with the state(5) to be set 00:15:21.589 [2024-07-12 01:33:47.782227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fcbb70 is same with the state(5) to be set 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 [2024-07-12 01:33:47.815125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a00 is same with the state(5) to be set 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed 
with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 [2024-07-12 01:33:47.815732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3640 is same with the state(5) to be set 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 [2024-07-12 01:33:47.818567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x7f4d6c00bfe0 is same with the state(5) to be set 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Write completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 Read completed with error (sct=0, sc=8) 00:15:21.589 [2024-07-12 01:33:47.818690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4d6c00c780 is same with the state(5) to be set 00:15:21.589 Initializing NVMe Controllers 00:15:21.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.589 Controller IO queue size 128, less than required. 00:15:21.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:21.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:21.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:21.589 Initialization complete. Launching workers. 
00:15:21.589 ======================================================== 00:15:21.589 Latency(us) 00:15:21.589 Device Information : IOPS MiB/s Average min max 00:15:21.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.38 0.09 923946.05 258.41 1007226.47 00:15:21.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 180.37 0.09 913015.39 417.06 1009996.04 00:15:21.589 ======================================================== 00:15:21.590 Total : 356.75 0.17 918419.65 258.41 1009996.04 00:15:21.590 00:15:21.590 [2024-07-12 01:33:47.819172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcbb70 (9): Bad file descriptor 00:15:21.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3893158 00:15:21.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3893158) - No such process 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3893158 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3893158 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3893158 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.590 [2024-07-12 01:33:47.846391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3893833 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:21.590 01:33:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:21.590 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.590 [2024-07-12 01:33:47.917551] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:22.161 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:22.161 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:22.161 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:22.730 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:22.730 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:22.730 01:33:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:23.306 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:23.306 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:23.306 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:23.567 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:23.567 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:23.567 01:33:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.138 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.138 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:24.138 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.709 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.709 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:24.709 01:33:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
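Stripped of the xtrace noise, the delete-under-load scenario that delete_subsystem.sh exercises above is roughly the following rpc.py/perf sequence. This is a condensed sketch of the calls visible in the trace, not the script itself; rpc.py is assumed to talk to the default /var/tmp/spdk.sock and paths are abbreviated relative to the spdk checkout.
  # transport, subsystem, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev wrapped in a delay bdev keeps I/O in flight long enough to race the delete
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # start perf against the listener, then delete the subsystem underneath it
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # perf is expected to report the aborted I/O (sc=8) and exit, as seen in the log above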
00:15:24.709 Initializing NVMe Controllers 00:15:24.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.709 Controller IO queue size 128, less than required. 00:15:24.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:24.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:24.709 Initialization complete. Launching workers. 00:15:24.709 ======================================================== 00:15:24.709 Latency(us) 00:15:24.709 Device Information : IOPS MiB/s Average min max 00:15:24.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002006.04 1000171.74 1005745.93 00:15:24.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003465.85 1000236.03 1042400.90 00:15:24.709 ======================================================== 00:15:24.709 Total : 256.00 0.12 1002735.94 1000171.74 1042400.90 00:15:24.709 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3893833 00:15:25.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3893833) - No such process 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3893833 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.278 rmmod nvme_tcp 00:15:25.278 rmmod nvme_fabrics 00:15:25.278 rmmod nvme_keyring 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3893126 ']' 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3893126 00:15:25.278 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3893126 ']' 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3893126 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3893126 00:15:25.279 01:33:51 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3893126' 00:15:25.279 killing process with pid 3893126 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3893126 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3893126 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.279 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.539 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.539 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.540 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.540 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.540 01:33:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.453 01:33:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.453 00:15:27.453 real 0m18.539s 00:15:27.453 user 0m30.218s 00:15:27.453 sys 0m7.068s 00:15:27.453 01:33:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.453 01:33:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:27.453 ************************************ 00:15:27.453 END TEST nvmf_delete_subsystem 00:15:27.453 ************************************ 00:15:27.453 01:33:53 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:27.453 01:33:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:27.453 01:33:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.453 01:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.453 ************************************ 00:15:27.453 START TEST nvmf_ns_masking 00:15:27.453 ************************************ 00:15:27.453 01:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:27.715 * Looking for test storage... 
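The nvmftestfini teardown traced just above boils down to: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, remove the namespace plumbing, and flush the initiator address. A compressed sketch of what this run shows (the _remove_spdk_ns helper is not expanded in the log, so only its effect is noted, and the pid is the one from this run):
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 3893126                   # nvmf_tgt pid of this run, followed by wait
  _remove_spdk_ns                # removes the cvl_0_0_ns_spdk namespace set up earlier
  ip -4 addr flush cvl_0_1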
00:15:27.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.715 01:33:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=b85d7e3e-45cc-4bc5-93a7-1adbfb87e2dc 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.716 01:33:53 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.716 01:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.856 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:35.857 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:35.857 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:35.857 Found net devices under 0000:31:00.0: cvl_0_0 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:35.857 Found net devices under 0000:31:00.1: cvl_0_1 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.857 01:34:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:35.857 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:15:35.857 00:15:35.857 --- 10.0.0.2 ping statistics --- 00:15:35.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.857 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:15:35.857 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:15:35.857 00:15:35.857 --- 10.0.0.1 ping statistics --- 00:15:35.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.858 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3899302 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3899302 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3899302 ']' 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:35.858 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:35.858 [2024-07-12 01:34:02.117729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
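The nvmf_tcp_init trace above boils down to a short iproute2/iptables sequence; a condensed sketch follows (the namespace name, the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the values this particular host happened to use, not defaults that apply everywhere):

  # Move the target-side interface into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1 on the host; the target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic to port 4420 and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1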
00:15:35.858 [2024-07-12 01:34:02.117793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.858 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.858 [2024-07-12 01:34:02.200751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.119 [2024-07-12 01:34:02.241090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.119 [2024-07-12 01:34:02.241135] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.119 [2024-07-12 01:34:02.241143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.119 [2024-07-12 01:34:02.241149] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.119 [2024-07-12 01:34:02.241155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.119 [2024-07-12 01:34:02.241271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.119 [2024-07-12 01:34:02.241486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.119 [2024-07-12 01:34:02.241486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.119 [2024-07-12 01:34:02.241336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.701 01:34:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:36.965 [2024-07-12 01:34:03.081266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.965 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:36.965 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:36.965 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.965 Malloc1 00:15:36.965 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.226 Malloc2 00:15:37.226 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.486 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:37.486 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.747 [2024-07-12 01:34:03.925474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.747 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:37.747 01:34:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b85d7e3e-45cc-4bc5-93a7-1adbfb87e2dc -a 10.0.0.2 -s 4420 -i 4 00:15:38.008 01:34:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.008 01:34:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:38.008 01:34:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.008 01:34:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:38.008 01:34:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:39.921 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:39.922 [ 0]:0x1 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11f732f7b66a4fccb33019bdc2d3c80c 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11f732f7b66a4fccb33019bdc2d3c80c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:39.922 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
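With the listener up, the connect and ns_is_visible helpers exercised in the rest of this test reduce to ordinary nvme-cli invocations; a rough sketch, using the host NQN, host ID and controller name seen in this run:

  # Attach the initiator to the subsystem over TCP with 4 I/O queues
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 \
      -I b85d7e3e-45cc-4bc5-93a7-1adbfb87e2dc
  # Resolve which controller device the connect produced (nvme0 in this run)
  nvme list-subsys -o json | jq -r '.[].Subsystems[]
      | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
  # ns_is_visible: the namespace must appear in list-ns and report a non-zero NGUID
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid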
00:15:40.182 [ 0]:0x1 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11f732f7b66a4fccb33019bdc2d3c80c 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11f732f7b66a4fccb33019bdc2d3c80c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:40.182 [ 1]:0x2 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.182 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:40.443 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:40.443 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.443 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:40.443 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.703 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.703 01:34:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b85d7e3e-45cc-4bc5-93a7-1adbfb87e2dc -a 10.0.0.2 -s 4420 -i 4 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:40.964 01:34:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:43.508 [ 0]:0x2 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:43.508 [ 0]:0x1 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11f732f7b66a4fccb33019bdc2d3c80c 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11f732f7b66a4fccb33019bdc2d3c80c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:43.508 [ 1]:0x2 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.508 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:43.769 
01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:43.769 [ 0]:0x2 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:43.769 01:34:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.769 01:34:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b85d7e3e-45cc-4bc5-93a7-1adbfb87e2dc -a 10.0.0.2 -s 4420 -i 4 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:44.029 01:34:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:46.573 [ 0]:0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11f732f7b66a4fccb33019bdc2d3c80c 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11f732f7b66a4fccb33019bdc2d3c80c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:46.573 [ 1]:0x2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:46.573 [ 0]:0x2 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:46.573 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:46.573 [2024-07-12 01:34:12.917506] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:46.573 request: 00:15:46.573 { 00:15:46.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.573 "nsid": 2, 00:15:46.573 "host": "nqn.2016-06.io.spdk:host1", 00:15:46.573 "method": 
"nvmf_ns_remove_host", 00:15:46.573 "req_id": 1 00:15:46.573 } 00:15:46.573 Got JSON-RPC error response 00:15:46.573 response: 00:15:46.573 { 00:15:46.573 "code": -32602, 00:15:46.573 "message": "Invalid parameters" 00:15:46.573 } 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:46.835 01:34:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:46.835 [ 0]:0x2 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1a6953e31634b07adc31311db1dda21 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1a6953e31634b07adc31311db1dda21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:46.835 01:34:13 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.095 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.095 rmmod nvme_tcp 00:15:47.095 rmmod nvme_fabrics 00:15:47.095 rmmod nvme_keyring 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3899302 ']' 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3899302 00:15:47.355 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3899302 ']' 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3899302 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3899302 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3899302' 00:15:47.356 killing process with pid 3899302 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3899302 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3899302 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.356 01:34:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.898 
01:34:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.898 00:15:49.898 real 0m21.949s 00:15:49.898 user 0m50.514s 00:15:49.898 sys 0m7.444s 00:15:49.898 01:34:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:49.898 01:34:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:49.898 ************************************ 00:15:49.898 END TEST nvmf_ns_masking 00:15:49.898 ************************************ 00:15:49.898 01:34:15 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:49.898 01:34:15 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:49.898 01:34:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:49.898 01:34:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:49.898 01:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.898 ************************************ 00:15:49.898 START TEST nvmf_nvme_cli 00:15:49.898 ************************************ 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:49.898 * Looking for test storage... 00:15:49.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.898 01:34:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:58.034 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:58.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:58.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.035 01:34:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:58.035 Found net devices under 0000:31:00.0: cvl_0_0 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:58.035 Found net devices under 0000:31:00.1: cvl_0_1 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.035 01:34:23 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:58.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:15:58.035 00:15:58.035 --- 10.0.0.2 ping statistics --- 00:15:58.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.035 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:15:58.035 00:15:58.035 --- 10.0.0.1 ping statistics --- 00:15:58.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.035 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3906348 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3906348 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3906348 ']' 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
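As in the ns_masking run, nvmfappstart launches the target inside the test namespace and then waits for its RPC socket before issuing any rpc.py calls; roughly as below (the polling loop is illustrative only — waitforlisten's actual retry logic lives in autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll /var/tmp/spdk.sock until the target answers RPCs (the real helper caps the retries)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done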
00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:58.035 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.035 [2024-07-12 01:34:24.190173] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:58.035 [2024-07-12 01:34:24.190222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.035 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.035 [2024-07-12 01:34:24.265775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.036 [2024-07-12 01:34:24.297683] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.036 [2024-07-12 01:34:24.297721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.036 [2024-07-12 01:34:24.297728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.036 [2024-07-12 01:34:24.297735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.036 [2024-07-12 01:34:24.297741] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.036 [2024-07-12 01:34:24.297879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.036 [2024-07-12 01:34:24.298017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.036 [2024-07-12 01:34:24.298173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.036 [2024-07-12 01:34:24.298174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.608 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:58.608 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:58.608 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.608 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.608 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 01:34:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.869 01:34:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.869 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 [2024-07-12 01:34:24.998895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 Malloc0 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 Malloc1 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 [2024-07-12 01:34:25.088636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:58.869 00:15:58.869 Discovery Log Number of Records 2, Generation counter 2 00:15:58.869 =====Discovery Log Entry 0====== 00:15:58.869 trtype: tcp 00:15:58.869 adrfam: ipv4 00:15:58.869 subtype: current discovery subsystem 00:15:58.869 treq: not required 00:15:58.869 portid: 0 00:15:58.869 trsvcid: 4420 00:15:58.869 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:58.869 traddr: 10.0.0.2 00:15:58.869 eflags: explicit discovery connections, duplicate discovery information 00:15:58.869 sectype: none 00:15:58.869 =====Discovery Log Entry 1====== 00:15:58.869 trtype: tcp 00:15:58.869 adrfam: ipv4 00:15:58.869 subtype: nvme subsystem 00:15:58.869 treq: not required 00:15:58.869 portid: 0 00:15:58.869 trsvcid: 
4420 00:15:58.869 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:58.869 traddr: 10.0.0.2 00:15:58.869 eflags: none 00:15:58.869 sectype: none 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:58.869 01:34:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:00.780 01:34:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:02.691 01:34:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:02.691 /dev/nvme0n1 ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:02.691 01:34:28 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.691 rmmod nvme_tcp 00:16:02.691 rmmod nvme_fabrics 00:16:02.691 rmmod nvme_keyring 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3906348 ']' 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3906348 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3906348 ']' 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3906348 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.691 01:34:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3906348 00:16:02.692 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:02.692 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:02.692 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3906348' 00:16:02.692 killing process with pid 3906348 00:16:02.692 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3906348 00:16:02.692 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3906348 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.953 01:34:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.575 01:34:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.575 00:16:05.575 real 0m15.429s 00:16:05.575 user 0m21.813s 00:16:05.575 sys 0m6.540s 00:16:05.575 01:34:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:05.575 01:34:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:05.575 ************************************ 00:16:05.575 END TEST nvmf_nvme_cli 00:16:05.575 ************************************ 00:16:05.575 01:34:31 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:16:05.575 01:34:31 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:05.575 01:34:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:05.575 01:34:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:05.575 01:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.575 ************************************ 00:16:05.575 START TEST nvmf_vfio_user 00:16:05.575 ************************************ 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:05.575 * Looking for test storage... 00:16:05.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.575 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:05.576 
01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3907849 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3907849' 00:16:05.576 Process pid: 3907849 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3907849 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3907849 ']' 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:05.576 01:34:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:05.576 [2024-07-12 01:34:31.524659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:05.576 [2024-07-12 01:34:31.524729] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.576 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.576 [2024-07-12 01:34:31.598302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.576 [2024-07-12 01:34:31.638572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.576 [2024-07-12 01:34:31.638613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.576 [2024-07-12 01:34:31.638625] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.576 [2024-07-12 01:34:31.638632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.576 [2024-07-12 01:34:31.638637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
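Note: the vfio-user variant of the test starts the same nvmf_tgt application, but pinned to an explicit core list and using the VFIOUSER transport instead of TCP. The provisioning that follows in the trace creates a malloc bdev, a subsystem, and a vfio-user listener whose address is a directory on the filesystem rather than an IP/port pair. A condensed sketch with the names used in this run (paths shortened; the second controller, cnode2/Malloc2 under vfio-user2/2, is set up the same way):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &      # wait for /var/tmp/spdk.sock before issuing RPCs
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1            # endpoint directory used as the listener address
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0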
00:16:05.576 [2024-07-12 01:34:31.638775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.576 [2024-07-12 01:34:31.638906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.576 [2024-07-12 01:34:31.639061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.576 [2024-07-12 01:34:31.639062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.165 01:34:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:06.166 01:34:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:06.166 01:34:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:07.109 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:07.370 Malloc1 00:16:07.370 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:07.631 01:34:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:07.893 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:07.893 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.893 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:07.893 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:08.155 Malloc2 00:16:08.155 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:08.416 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:08.416 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:08.680 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:08.680 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:08.680 01:34:34 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.680 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:08.680 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:08.680 01:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:08.680 [2024-07-12 01:34:34.922847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:08.680 [2024-07-12 01:34:34.922886] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908530 ] 00:16:08.680 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.680 [2024-07-12 01:34:34.956862] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:08.680 [2024-07-12 01:34:34.964547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:08.680 [2024-07-12 01:34:34.964566] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f07f4d7f000 00:16:08.680 [2024-07-12 01:34:34.965549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.966541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.967556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.968562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.969566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.970570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.971562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.972574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.680 [2024-07-12 01:34:34.974237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:08.680 [2024-07-12 01:34:34.974248] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f07f3b43000 00:16:08.680 [2024-07-12 01:34:34.975577] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.680 [2024-07-12 01:34:34.996392] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:08.680 [2024-07-12 01:34:34.996413] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:08.680 [2024-07-12 01:34:34.998708] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:08.680 [2024-07-12 01:34:34.998749] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:08.680 [2024-07-12 01:34:34.998835] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:08.680 [2024-07-12 01:34:34.998851] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:08.680 [2024-07-12 01:34:34.998857] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:08.680 [2024-07-12 01:34:34.999708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:08.680 [2024-07-12 01:34:34.999723] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:08.680 [2024-07-12 01:34:34.999731] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:08.680 [2024-07-12 01:34:35.000713] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:08.680 [2024-07-12 01:34:35.000722] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:08.680 [2024-07-12 01:34:35.000729] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.001715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:08.680 [2024-07-12 01:34:35.001723] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.002724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:08.680 [2024-07-12 01:34:35.002731] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:08.680 [2024-07-12 01:34:35.002736] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.002742] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.002848] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:08.680 [2024-07-12 01:34:35.002853] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.002858] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:08.680 [2024-07-12 01:34:35.003731] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:08.680 [2024-07-12 01:34:35.004733] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:08.680 [2024-07-12 01:34:35.005749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:08.680 [2024-07-12 01:34:35.006746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:08.680 [2024-07-12 01:34:35.006800] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:08.680 [2024-07-12 01:34:35.007755] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:08.680 [2024-07-12 01:34:35.007763] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:08.680 [2024-07-12 01:34:35.007767] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:08.680 [2024-07-12 01:34:35.007788] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:08.680 [2024-07-12 01:34:35.007797] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:08.680 [2024-07-12 01:34:35.007814] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.681 [2024-07-12 01:34:35.007822] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.681 [2024-07-12 01:34:35.007834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.007869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.007880] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:08.681 [2024-07-12 01:34:35.007885] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:08.681 [2024-07-12 01:34:35.007889] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:08.681 [2024-07-12 01:34:35.007894] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:08.681 [2024-07-12 01:34:35.007899] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:08.681 [2024-07-12 01:34:35.007903] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:08.681 [2024-07-12 01:34:35.007908] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.007915] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.007925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.007937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.007947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.681 [2024-07-12 01:34:35.007956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.681 [2024-07-12 01:34:35.007964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.681 [2024-07-12 01:34:35.007972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.681 [2024-07-12 01:34:35.007977] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.007985] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.007994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008008] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:08.681 [2024-07-12 01:34:35.008013] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008020] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008029] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008107] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008115] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008122] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:08.681 [2024-07-12 01:34:35.008126] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:08.681 [2024-07-12 01:34:35.008132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008154] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:08.681 [2024-07-12 01:34:35.008166] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008174] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008181] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.681 [2024-07-12 01:34:35.008185] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.681 [2024-07-12 01:34:35.008191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008214] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008222] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.681 [2024-07-12 01:34:35.008237] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.681 [2024-07-12 01:34:35.008244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008261] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008267] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
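Note: the *DEBUG* lines in this stretch of the trace are the host-side controller initialization state machine (connect the admin queue, read VS/CAP, toggle CC.EN, wait for CSTS.RDY = 1, Identify, configure AER, then the keep-alive and queue-count features) running inside spdk_nvme_identify, which attaches to the vfio-user endpoint through a transport ID string rather than a PCI address. The invocation, condensed from the trace (binary path shortened; the -L flags appear to enable the per-component debug logging seen here):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci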
00:16:08.681 [2024-07-12 01:34:35.008275] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008280] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008285] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008292] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:08.681 [2024-07-12 01:34:35.008296] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:08.681 [2024-07-12 01:34:35.008301] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:08.681 [2024-07-12 01:34:35.008321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008398] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:08.681 [2024-07-12 01:34:35.008403] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:08.681 [2024-07-12 01:34:35.008406] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:08.681 [2024-07-12 01:34:35.008410] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:08.681 [2024-07-12 01:34:35.008416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:08.681 [2024-07-12 01:34:35.008423] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:08.681 [2024-07-12 01:34:35.008427] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:08.681 [2024-07-12 01:34:35.008433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:08.681 [2024-07-12 01:34:35.008444] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.681 [2024-07-12 01:34:35.008450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008457] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:08.681 [2024-07-12 01:34:35.008461] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:08.681 [2024-07-12 01:34:35.008467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:08.681 [2024-07-12 01:34:35.008474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:08.681 [2024-07-12 01:34:35.008504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:08.681 ===================================================== 00:16:08.681 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:08.681 ===================================================== 00:16:08.681 Controller Capabilities/Features 00:16:08.681 ================================ 00:16:08.681 Vendor ID: 4e58 00:16:08.681 Subsystem Vendor ID: 4e58 00:16:08.681 Serial Number: SPDK1 00:16:08.681 Model Number: SPDK bdev Controller 00:16:08.681 Firmware Version: 24.05.1 00:16:08.681 Recommended Arb Burst: 6 00:16:08.681 IEEE OUI Identifier: 8d 6b 50 00:16:08.681 Multi-path I/O 00:16:08.681 May have multiple subsystem ports: Yes 00:16:08.681 May have multiple controllers: Yes 00:16:08.681 Associated with SR-IOV VF: No 00:16:08.681 Max Data Transfer Size: 131072 00:16:08.682 Max Number of Namespaces: 32 00:16:08.682 Max Number of I/O Queues: 127 00:16:08.682 NVMe Specification Version (VS): 1.3 00:16:08.682 NVMe Specification Version (Identify): 1.3 00:16:08.682 Maximum Queue Entries: 256 00:16:08.682 Contiguous Queues Required: Yes 00:16:08.682 Arbitration Mechanisms Supported 00:16:08.682 Weighted Round Robin: Not Supported 00:16:08.682 Vendor Specific: Not Supported 00:16:08.682 Reset Timeout: 15000 ms 00:16:08.682 Doorbell Stride: 4 bytes 00:16:08.682 NVM Subsystem Reset: Not Supported 00:16:08.682 Command Sets Supported 00:16:08.682 NVM Command Set: Supported 00:16:08.682 Boot Partition: Not Supported 00:16:08.682 Memory Page Size Minimum: 4096 bytes 00:16:08.682 Memory Page Size Maximum: 4096 bytes 00:16:08.682 Persistent Memory Region: Not Supported 00:16:08.682 Optional Asynchronous Events Supported 00:16:08.682 Namespace Attribute Notices: Supported 00:16:08.682 Firmware Activation Notices: Not Supported 00:16:08.682 ANA Change Notices: Not Supported 
00:16:08.682 PLE Aggregate Log Change Notices: Not Supported 00:16:08.682 LBA Status Info Alert Notices: Not Supported 00:16:08.682 EGE Aggregate Log Change Notices: Not Supported 00:16:08.682 Normal NVM Subsystem Shutdown event: Not Supported 00:16:08.682 Zone Descriptor Change Notices: Not Supported 00:16:08.682 Discovery Log Change Notices: Not Supported 00:16:08.682 Controller Attributes 00:16:08.682 128-bit Host Identifier: Supported 00:16:08.682 Non-Operational Permissive Mode: Not Supported 00:16:08.682 NVM Sets: Not Supported 00:16:08.682 Read Recovery Levels: Not Supported 00:16:08.682 Endurance Groups: Not Supported 00:16:08.682 Predictable Latency Mode: Not Supported 00:16:08.682 Traffic Based Keep ALive: Not Supported 00:16:08.682 Namespace Granularity: Not Supported 00:16:08.682 SQ Associations: Not Supported 00:16:08.682 UUID List: Not Supported 00:16:08.682 Multi-Domain Subsystem: Not Supported 00:16:08.682 Fixed Capacity Management: Not Supported 00:16:08.682 Variable Capacity Management: Not Supported 00:16:08.682 Delete Endurance Group: Not Supported 00:16:08.682 Delete NVM Set: Not Supported 00:16:08.682 Extended LBA Formats Supported: Not Supported 00:16:08.682 Flexible Data Placement Supported: Not Supported 00:16:08.682 00:16:08.682 Controller Memory Buffer Support 00:16:08.682 ================================ 00:16:08.682 Supported: No 00:16:08.682 00:16:08.682 Persistent Memory Region Support 00:16:08.682 ================================ 00:16:08.682 Supported: No 00:16:08.682 00:16:08.682 Admin Command Set Attributes 00:16:08.682 ============================ 00:16:08.682 Security Send/Receive: Not Supported 00:16:08.682 Format NVM: Not Supported 00:16:08.682 Firmware Activate/Download: Not Supported 00:16:08.682 Namespace Management: Not Supported 00:16:08.682 Device Self-Test: Not Supported 00:16:08.682 Directives: Not Supported 00:16:08.682 NVMe-MI: Not Supported 00:16:08.682 Virtualization Management: Not Supported 00:16:08.682 Doorbell Buffer Config: Not Supported 00:16:08.682 Get LBA Status Capability: Not Supported 00:16:08.682 Command & Feature Lockdown Capability: Not Supported 00:16:08.682 Abort Command Limit: 4 00:16:08.682 Async Event Request Limit: 4 00:16:08.682 Number of Firmware Slots: N/A 00:16:08.682 Firmware Slot 1 Read-Only: N/A 00:16:08.682 Firmware Activation Without Reset: N/A 00:16:08.682 Multiple Update Detection Support: N/A 00:16:08.682 Firmware Update Granularity: No Information Provided 00:16:08.682 Per-Namespace SMART Log: No 00:16:08.682 Asymmetric Namespace Access Log Page: Not Supported 00:16:08.682 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:08.682 Command Effects Log Page: Supported 00:16:08.682 Get Log Page Extended Data: Supported 00:16:08.682 Telemetry Log Pages: Not Supported 00:16:08.682 Persistent Event Log Pages: Not Supported 00:16:08.682 Supported Log Pages Log Page: May Support 00:16:08.682 Commands Supported & Effects Log Page: Not Supported 00:16:08.682 Feature Identifiers & Effects Log Page:May Support 00:16:08.682 NVMe-MI Commands & Effects Log Page: May Support 00:16:08.682 Data Area 4 for Telemetry Log: Not Supported 00:16:08.682 Error Log Page Entries Supported: 128 00:16:08.682 Keep Alive: Supported 00:16:08.682 Keep Alive Granularity: 10000 ms 00:16:08.682 00:16:08.682 NVM Command Set Attributes 00:16:08.682 ========================== 00:16:08.682 Submission Queue Entry Size 00:16:08.682 Max: 64 00:16:08.682 Min: 64 00:16:08.682 Completion Queue Entry Size 00:16:08.682 Max: 16 00:16:08.682 Min: 16 
00:16:08.682 Number of Namespaces: 32 00:16:08.682 Compare Command: Supported 00:16:08.682 Write Uncorrectable Command: Not Supported 00:16:08.682 Dataset Management Command: Supported 00:16:08.682 Write Zeroes Command: Supported 00:16:08.682 Set Features Save Field: Not Supported 00:16:08.682 Reservations: Not Supported 00:16:08.682 Timestamp: Not Supported 00:16:08.682 Copy: Supported 00:16:08.682 Volatile Write Cache: Present 00:16:08.682 Atomic Write Unit (Normal): 1 00:16:08.682 Atomic Write Unit (PFail): 1 00:16:08.682 Atomic Compare & Write Unit: 1 00:16:08.682 Fused Compare & Write: Supported 00:16:08.682 Scatter-Gather List 00:16:08.682 SGL Command Set: Supported (Dword aligned) 00:16:08.682 SGL Keyed: Not Supported 00:16:08.682 SGL Bit Bucket Descriptor: Not Supported 00:16:08.682 SGL Metadata Pointer: Not Supported 00:16:08.682 Oversized SGL: Not Supported 00:16:08.682 SGL Metadata Address: Not Supported 00:16:08.682 SGL Offset: Not Supported 00:16:08.682 Transport SGL Data Block: Not Supported 00:16:08.682 Replay Protected Memory Block: Not Supported 00:16:08.682 00:16:08.682 Firmware Slot Information 00:16:08.682 ========================= 00:16:08.682 Active slot: 1 00:16:08.682 Slot 1 Firmware Revision: 24.05.1 00:16:08.682 00:16:08.682 00:16:08.682 Commands Supported and Effects 00:16:08.682 ============================== 00:16:08.682 Admin Commands 00:16:08.682 -------------- 00:16:08.682 Get Log Page (02h): Supported 00:16:08.682 Identify (06h): Supported 00:16:08.682 Abort (08h): Supported 00:16:08.682 Set Features (09h): Supported 00:16:08.682 Get Features (0Ah): Supported 00:16:08.682 Asynchronous Event Request (0Ch): Supported 00:16:08.682 Keep Alive (18h): Supported 00:16:08.682 I/O Commands 00:16:08.682 ------------ 00:16:08.682 Flush (00h): Supported LBA-Change 00:16:08.682 Write (01h): Supported LBA-Change 00:16:08.682 Read (02h): Supported 00:16:08.682 Compare (05h): Supported 00:16:08.682 Write Zeroes (08h): Supported LBA-Change 00:16:08.682 Dataset Management (09h): Supported LBA-Change 00:16:08.682 Copy (19h): Supported LBA-Change 00:16:08.682 Unknown (79h): Supported LBA-Change 00:16:08.682 Unknown (7Ah): Supported 00:16:08.682 00:16:08.682 Error Log 00:16:08.682 ========= 00:16:08.682 00:16:08.682 Arbitration 00:16:08.682 =========== 00:16:08.682 Arbitration Burst: 1 00:16:08.682 00:16:08.682 Power Management 00:16:08.682 ================ 00:16:08.682 Number of Power States: 1 00:16:08.682 Current Power State: Power State #0 00:16:08.682 Power State #0: 00:16:08.682 Max Power: 0.00 W 00:16:08.682 Non-Operational State: Operational 00:16:08.682 Entry Latency: Not Reported 00:16:08.682 Exit Latency: Not Reported 00:16:08.682 Relative Read Throughput: 0 00:16:08.682 Relative Read Latency: 0 00:16:08.682 Relative Write Throughput: 0 00:16:08.682 Relative Write Latency: 0 00:16:08.682 Idle Power: Not Reported 00:16:08.682 Active Power: Not Reported 00:16:08.682 Non-Operational Permissive Mode: Not Supported 00:16:08.682 00:16:08.682 Health Information 00:16:08.682 ================== 00:16:08.682 Critical Warnings: 00:16:08.682 Available Spare Space: OK 00:16:08.682 Temperature: OK 00:16:08.682 Device Reliability: OK 00:16:08.682 Read Only: No 00:16:08.682 Volatile Memory Backup: OK 00:16:08.682 Current Temperature: 0 Kelvin[2024-07-12 01:34:35.008607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:08.682 [2024-07-12 01:34:35.008616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:08.682 [2024-07-12 01:34:35.008641] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:08.682 [2024-07-12 01:34:35.008649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.682 [2024-07-12 01:34:35.008656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.682 [2024-07-12 01:34:35.008662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.682 [2024-07-12 01:34:35.008668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.682 [2024-07-12 01:34:35.010275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:08.682 [2024-07-12 01:34:35.010288] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:08.682 [2024-07-12 01:34:35.010773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:08.682 [2024-07-12 01:34:35.010813] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:08.682 [2024-07-12 01:34:35.010819] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:08.682 [2024-07-12 01:34:35.011776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:08.683 [2024-07-12 01:34:35.011787] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:08.683 [2024-07-12 01:34:35.011845] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:08.683 [2024-07-12 01:34:35.016238] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.944 (-273 Celsius) 00:16:08.944 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:08.944 Available Spare: 0% 00:16:08.944 Available Spare Threshold: 0% 00:16:08.944 Life Percentage Used: 0% 00:16:08.944 Data Units Read: 0 00:16:08.944 Data Units Written: 0 00:16:08.944 Host Read Commands: 0 00:16:08.944 Host Write Commands: 0 00:16:08.944 Controller Busy Time: 0 minutes 00:16:08.944 Power Cycles: 0 00:16:08.944 Power On Hours: 0 hours 00:16:08.944 Unsafe Shutdowns: 0 00:16:08.944 Unrecoverable Media Errors: 0 00:16:08.944 Lifetime Error Log Entries: 0 00:16:08.944 Warning Temperature Time: 0 minutes 00:16:08.944 Critical Temperature Time: 0 minutes 00:16:08.944 00:16:08.944 Number of Queues 00:16:08.944 ================ 00:16:08.944 Number of I/O Submission Queues: 127 00:16:08.944 Number of I/O Completion Queues: 127 00:16:08.944 00:16:08.944 Active Namespaces 00:16:08.944 ================= 00:16:08.944 Namespace ID:1 00:16:08.944 Error Recovery Timeout: Unlimited 00:16:08.944 Command Set Identifier: NVM (00h) 00:16:08.945 Deallocate: Supported 00:16:08.945 Deallocated/Unwritten Error: Not Supported 
00:16:08.945 Deallocated Read Value: Unknown 00:16:08.945 Deallocate in Write Zeroes: Not Supported 00:16:08.945 Deallocated Guard Field: 0xFFFF 00:16:08.945 Flush: Supported 00:16:08.945 Reservation: Supported 00:16:08.945 Namespace Sharing Capabilities: Multiple Controllers 00:16:08.945 Size (in LBAs): 131072 (0GiB) 00:16:08.945 Capacity (in LBAs): 131072 (0GiB) 00:16:08.945 Utilization (in LBAs): 131072 (0GiB) 00:16:08.945 NGUID: 5C3DD33E40874112886641ED5FCD6E8F 00:16:08.945 UUID: 5c3dd33e-4087-4112-8866-41ed5fcd6e8f 00:16:08.945 Thin Provisioning: Not Supported 00:16:08.945 Per-NS Atomic Units: Yes 00:16:08.945 Atomic Boundary Size (Normal): 0 00:16:08.945 Atomic Boundary Size (PFail): 0 00:16:08.945 Atomic Boundary Offset: 0 00:16:08.945 Maximum Single Source Range Length: 65535 00:16:08.945 Maximum Copy Length: 65535 00:16:08.945 Maximum Source Range Count: 1 00:16:08.945 NGUID/EUI64 Never Reused: No 00:16:08.945 Namespace Write Protected: No 00:16:08.945 Number of LBA Formats: 1 00:16:08.945 Current LBA Format: LBA Format #00 00:16:08.945 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:08.945 00:16:08.945 01:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:08.945 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.945 [2024-07-12 01:34:35.201877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:14.237 Initializing NVMe Controllers 00:16:14.237 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:14.237 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:14.237 Initialization complete. Launching workers. 00:16:14.237 ======================================================== 00:16:14.237 Latency(us) 00:16:14.237 Device Information : IOPS MiB/s Average min max 00:16:14.237 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40014.20 156.31 3199.39 831.41 6833.51 00:16:14.237 ======================================================== 00:16:14.237 Total : 40014.20 156.31 3199.39 831.41 6833.51 00:16:14.237 00:16:14.237 [2024-07-12 01:34:40.222540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:14.237 01:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:14.237 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.237 [2024-07-12 01:34:40.398390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.529 Initializing NVMe Controllers 00:16:19.529 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:19.529 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:19.529 Initialization complete. Launching workers. 
00:16:19.529 ======================================================== 00:16:19.529 Latency(us) 00:16:19.529 Device Information : IOPS MiB/s Average min max 00:16:19.529 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 6989.57 8973.32 00:16:19.529 ======================================================== 00:16:19.529 Total : 16051.20 62.70 7980.74 6989.57 8973.32 00:16:19.529 00:16:19.529 [2024-07-12 01:34:45.434664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.529 01:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:19.529 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.529 [2024-07-12 01:34:45.625542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.818 [2024-07-12 01:34:50.696439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.818 Initializing NVMe Controllers 00:16:24.818 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:24.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:24.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:24.818 Initialization complete. Launching workers. 00:16:24.818 Starting thread on core 2 00:16:24.818 Starting thread on core 3 00:16:24.818 Starting thread on core 1 00:16:24.818 01:34:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:24.818 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.818 [2024-07-12 01:34:50.965602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.116 [2024-07-12 01:34:54.018961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.116 Initializing NVMe Controllers 00:16:28.116 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.116 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.116 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:28.116 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:28.116 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:28.116 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:28.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:28.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:28.116 Initialization complete. Launching workers. 
00:16:28.116 Starting thread on core 1 with urgent priority queue 00:16:28.116 Starting thread on core 2 with urgent priority queue 00:16:28.116 Starting thread on core 3 with urgent priority queue 00:16:28.116 Starting thread on core 0 with urgent priority queue 00:16:28.116 SPDK bdev Controller (SPDK1 ) core 0: 8607.33 IO/s 11.62 secs/100000 ios 00:16:28.116 SPDK bdev Controller (SPDK1 ) core 1: 11989.67 IO/s 8.34 secs/100000 ios 00:16:28.116 SPDK bdev Controller (SPDK1 ) core 2: 8018.67 IO/s 12.47 secs/100000 ios 00:16:28.116 SPDK bdev Controller (SPDK1 ) core 3: 12654.67 IO/s 7.90 secs/100000 ios 00:16:28.116 ======================================================== 00:16:28.116 00:16:28.116 01:34:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:28.116 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.116 [2024-07-12 01:34:54.290714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.116 Initializing NVMe Controllers 00:16:28.116 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.116 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.116 Namespace ID: 1 size: 0GB 00:16:28.116 Initialization complete. 00:16:28.116 INFO: using host memory buffer for IO 00:16:28.116 Hello world! 00:16:28.116 [2024-07-12 01:34:54.324894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.116 01:34:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:28.116 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.377 [2024-07-12 01:34:54.593684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.321 Initializing NVMe Controllers 00:16:29.321 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.321 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.321 Initialization complete. Launching workers. 
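For reference, the vfio-user data-path checks traced above (target/nvmf_vfio_user.sh steps @84 through @89) reduce to the following invocations. This is only a recap of the commands already shown in the trace lines, not a new test: the transport string is shortened into a shell variable R purely for brevity, and the binaries are addressed relative to the spdk checkout used by this job rather than by the full workspace path.

# vfio-user transport string used for device 1 in this run
R='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# @84/@85: 4 KiB queued reads, then writes, for 5 seconds on core 1 (-c 0x2)
build/bin/spdk_nvme_perf -r "$R" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
build/bin/spdk_nvme_perf -r "$R" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# @86: mixed random I/O with reconnect handling on cores 1-3 (-c 0xE)
build/examples/reconnect -r "$R" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# @87: arbitration example across four cores
build/examples/arbitration -t 3 -r "$R" -d 256 -g
# @88: hello_world single read/write sanity check
build/examples/hello_world -d 256 -g -r "$R"
# @89: per-I/O overhead measurement (submit/complete latency histograms follow in the log)
test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$R"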
00:16:29.321 submit (in ns) avg, min, max = 7755.6, 3905.8, 4004362.5 00:16:29.321 complete (in ns) avg, min, max = 20982.4, 2395.0, 6989822.5 00:16:29.321 00:16:29.322 Submit histogram 00:16:29.322 ================ 00:16:29.322 Range in us Cumulative Count 00:16:29.322 3.893 - 3.920: 0.0356% ( 7) 00:16:29.322 3.920 - 3.947: 0.4527% ( 82) 00:16:29.322 3.947 - 3.973: 2.2430% ( 352) 00:16:29.322 3.973 - 4.000: 8.5194% ( 1234) 00:16:29.322 4.000 - 4.027: 18.0255% ( 1869) 00:16:29.322 4.027 - 4.053: 30.4003% ( 2433) 00:16:29.322 4.053 - 4.080: 42.4648% ( 2372) 00:16:29.322 4.080 - 4.107: 56.3298% ( 2726) 00:16:29.322 4.107 - 4.133: 74.1366% ( 3501) 00:16:29.322 4.133 - 4.160: 87.3964% ( 2607) 00:16:29.322 4.160 - 4.187: 94.8426% ( 1464) 00:16:29.322 4.187 - 4.213: 98.1741% ( 655) 00:16:29.322 4.213 - 4.240: 99.1913% ( 200) 00:16:29.322 4.240 - 4.267: 99.4863% ( 58) 00:16:29.322 4.267 - 4.293: 99.5117% ( 5) 00:16:29.322 4.293 - 4.320: 99.5321% ( 4) 00:16:29.322 4.400 - 4.427: 99.5422% ( 2) 00:16:29.322 4.667 - 4.693: 99.5473% ( 1) 00:16:29.322 5.093 - 5.120: 99.5524% ( 1) 00:16:29.322 5.413 - 5.440: 99.5575% ( 1) 00:16:29.322 5.440 - 5.467: 99.5626% ( 1) 00:16:29.322 5.547 - 5.573: 99.5728% ( 2) 00:16:29.322 5.600 - 5.627: 99.5829% ( 2) 00:16:29.322 5.787 - 5.813: 99.5880% ( 1) 00:16:29.322 5.867 - 5.893: 99.5931% ( 1) 00:16:29.322 6.027 - 6.053: 99.6033% ( 2) 00:16:29.322 6.053 - 6.080: 99.6084% ( 1) 00:16:29.322 6.080 - 6.107: 99.6134% ( 1) 00:16:29.322 6.133 - 6.160: 99.6236% ( 2) 00:16:29.322 6.160 - 6.187: 99.6389% ( 3) 00:16:29.322 6.213 - 6.240: 99.6541% ( 3) 00:16:29.322 6.293 - 6.320: 99.6592% ( 1) 00:16:29.322 6.320 - 6.347: 99.6643% ( 1) 00:16:29.322 6.373 - 6.400: 99.6694% ( 1) 00:16:29.322 6.427 - 6.453: 99.6745% ( 1) 00:16:29.322 6.453 - 6.480: 99.6897% ( 3) 00:16:29.322 6.480 - 6.507: 99.6999% ( 2) 00:16:29.322 6.507 - 6.533: 99.7050% ( 1) 00:16:29.322 6.560 - 6.587: 99.7101% ( 1) 00:16:29.322 6.587 - 6.613: 99.7152% ( 1) 00:16:29.322 6.613 - 6.640: 99.7253% ( 2) 00:16:29.322 6.667 - 6.693: 99.7355% ( 2) 00:16:29.322 6.693 - 6.720: 99.7406% ( 1) 00:16:29.322 6.720 - 6.747: 99.7508% ( 2) 00:16:29.322 6.747 - 6.773: 99.7660% ( 3) 00:16:29.322 6.773 - 6.800: 99.7813% ( 3) 00:16:29.322 6.800 - 6.827: 99.7915% ( 2) 00:16:29.322 6.827 - 6.880: 99.8016% ( 2) 00:16:29.322 6.880 - 6.933: 99.8169% ( 3) 00:16:29.322 6.933 - 6.987: 99.8220% ( 1) 00:16:29.322 6.987 - 7.040: 99.8372% ( 3) 00:16:29.322 7.040 - 7.093: 99.8423% ( 1) 00:16:29.322 7.093 - 7.147: 99.8525% ( 2) 00:16:29.322 7.147 - 7.200: 99.8576% ( 1) 00:16:29.322 7.200 - 7.253: 99.8627% ( 1) 00:16:29.322 7.253 - 7.307: 99.8678% ( 1) 00:16:29.322 7.307 - 7.360: 99.8779% ( 2) 00:16:29.322 7.467 - 7.520: 99.8881% ( 2) 00:16:29.322 8.320 - 8.373: 99.8932% ( 1) 00:16:29.322 8.373 - 8.427: 99.8983% ( 1) 00:16:29.322 10.187 - 10.240: 99.9034% ( 1) 00:16:29.322 11.147 - 11.200: 99.9084% ( 1) 00:16:29.322 3986.773 - 4014.080: 100.0000% ( 18) 00:16:29.322 00:16:29.322 Complete histogram 00:16:29.322 ================== 00:16:29.322 Range in us Cumulative Count 00:16:29.322 2.387 - 2.400: 0.0051% ( 1) 00:16:29.322 2.400 - 2.413: 0.4171% ( 81) 00:16:29.322 2.413 - 2.427: 0.9562% ( 106) 00:16:29.322 2.427 - 2.440: 1.0376% ( 16) 00:16:29.322 2.440 - 2.453: 1.1393% ( 20) 00:16:29.322 2.453 - 2.467: 1.2105% ( 14) 00:16:29.322 2.467 - 2.480: 41.5645% ( 7934) 00:16:29.322 2.480 - 2.493: 66.1258% ( 4829) 00:16:29.322 2.493 - 2.507: 77.3460% ( 2206) 00:16:29.322 2.507 - 2.520: 82.8290% ( 1078) 00:16:29.322 2.520 - 2.533: 84.4464% ( 318) 00:16:29.322 
2.533 - 2.547: 86.8827% ( 479) 00:16:29.322 2.547 - 2.560: 92.2334% ( 1052) 00:16:29.322 2.560 - 2.573: 96.1294% ( 766) 00:16:29.322 2.573 - [2024-07-12 01:34:55.614207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.322 2.587: 97.8892% ( 346) 00:16:29.322 2.587 - 2.600: 98.7895% ( 177) 00:16:29.322 2.600 - 2.613: 99.1404% ( 69) 00:16:29.322 2.613 - 2.627: 99.2472% ( 21) 00:16:29.322 2.627 - 2.640: 99.2625% ( 3) 00:16:29.322 2.640 - 2.653: 99.2879% ( 5) 00:16:29.322 4.453 - 4.480: 99.2930% ( 1) 00:16:29.322 4.480 - 4.507: 99.2981% ( 1) 00:16:29.322 4.507 - 4.533: 99.3032% ( 1) 00:16:29.322 4.587 - 4.613: 99.3184% ( 3) 00:16:29.322 4.613 - 4.640: 99.3337% ( 3) 00:16:29.322 4.640 - 4.667: 99.3388% ( 1) 00:16:29.322 4.667 - 4.693: 99.3439% ( 1) 00:16:29.322 4.693 - 4.720: 99.3541% ( 2) 00:16:29.322 4.720 - 4.747: 99.3591% ( 1) 00:16:29.322 4.773 - 4.800: 99.3693% ( 2) 00:16:29.322 4.800 - 4.827: 99.3897% ( 4) 00:16:29.322 4.827 - 4.853: 99.3998% ( 2) 00:16:29.322 4.853 - 4.880: 99.4049% ( 1) 00:16:29.322 4.933 - 4.960: 99.4100% ( 1) 00:16:29.322 5.013 - 5.040: 99.4202% ( 2) 00:16:29.322 5.067 - 5.093: 99.4253% ( 1) 00:16:29.322 5.093 - 5.120: 99.4303% ( 1) 00:16:29.322 5.173 - 5.200: 99.4405% ( 2) 00:16:29.322 5.200 - 5.227: 99.4456% ( 1) 00:16:29.322 5.227 - 5.253: 99.4558% ( 2) 00:16:29.322 5.307 - 5.333: 99.4609% ( 1) 00:16:29.322 5.387 - 5.413: 99.4659% ( 1) 00:16:29.322 5.467 - 5.493: 99.4710% ( 1) 00:16:29.322 5.600 - 5.627: 99.4761% ( 1) 00:16:29.322 5.680 - 5.707: 99.4812% ( 1) 00:16:29.322 5.973 - 6.000: 99.4863% ( 1) 00:16:29.322 6.000 - 6.027: 99.4914% ( 1) 00:16:29.322 6.320 - 6.347: 99.4965% ( 1) 00:16:29.322 6.347 - 6.373: 99.5016% ( 1) 00:16:29.322 6.373 - 6.400: 99.5066% ( 1) 00:16:29.322 7.147 - 7.200: 99.5117% ( 1) 00:16:29.322 8.213 - 8.267: 99.5168% ( 1) 00:16:29.322 11.253 - 11.307: 99.5219% ( 1) 00:16:29.322 11.573 - 11.627: 99.5270% ( 1) 00:16:29.322 12.160 - 12.213: 99.5321% ( 1) 00:16:29.322 45.013 - 45.227: 99.5372% ( 1) 00:16:29.322 1010.347 - 1017.173: 99.5422% ( 1) 00:16:29.322 3031.040 - 3044.693: 99.5473% ( 1) 00:16:29.322 3713.707 - 3741.013: 99.5524% ( 1) 00:16:29.322 3986.773 - 4014.080: 99.9898% ( 86) 00:16:29.322 4969.813 - 4997.120: 99.9949% ( 1) 00:16:29.322 6963.200 - 6990.507: 100.0000% ( 1) 00:16:29.322 00:16:29.322 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:29.322 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:29.322 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:29.322 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:29.322 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.584 [ 00:16:29.584 { 00:16:29.584 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.584 "subtype": "Discovery", 00:16:29.584 "listen_addresses": [], 00:16:29.584 "allow_any_host": true, 00:16:29.584 "hosts": [] 00:16:29.584 }, 00:16:29.584 { 00:16:29.584 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.584 "subtype": "NVMe", 00:16:29.584 "listen_addresses": [ 00:16:29.584 { 00:16:29.584 "trtype": "VFIOUSER", 00:16:29.584 "adrfam": "IPv4", 00:16:29.584 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.584 "trsvcid": "0" 00:16:29.584 } 00:16:29.584 ], 00:16:29.584 "allow_any_host": true, 00:16:29.584 "hosts": [], 00:16:29.584 "serial_number": "SPDK1", 00:16:29.584 "model_number": "SPDK bdev Controller", 00:16:29.584 "max_namespaces": 32, 00:16:29.584 "min_cntlid": 1, 00:16:29.584 "max_cntlid": 65519, 00:16:29.584 "namespaces": [ 00:16:29.584 { 00:16:29.584 "nsid": 1, 00:16:29.584 "bdev_name": "Malloc1", 00:16:29.584 "name": "Malloc1", 00:16:29.584 "nguid": "5C3DD33E40874112886641ED5FCD6E8F", 00:16:29.584 "uuid": "5c3dd33e-4087-4112-8866-41ed5fcd6e8f" 00:16:29.584 } 00:16:29.584 ] 00:16:29.584 }, 00:16:29.584 { 00:16:29.584 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.584 "subtype": "NVMe", 00:16:29.584 "listen_addresses": [ 00:16:29.584 { 00:16:29.584 "trtype": "VFIOUSER", 00:16:29.584 "adrfam": "IPv4", 00:16:29.584 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.584 "trsvcid": "0" 00:16:29.584 } 00:16:29.584 ], 00:16:29.584 "allow_any_host": true, 00:16:29.584 "hosts": [], 00:16:29.584 "serial_number": "SPDK2", 00:16:29.584 "model_number": "SPDK bdev Controller", 00:16:29.584 "max_namespaces": 32, 00:16:29.584 "min_cntlid": 1, 00:16:29.584 "max_cntlid": 65519, 00:16:29.584 "namespaces": [ 00:16:29.584 { 00:16:29.584 "nsid": 1, 00:16:29.584 "bdev_name": "Malloc2", 00:16:29.584 "name": "Malloc2", 00:16:29.584 "nguid": "F8AD1215809140D0AB87F37E9DD1A55C", 00:16:29.584 "uuid": "f8ad1215-8091-40d0-ab87-f37e9dd1a55c" 00:16:29.584 } 00:16:29.584 ] 00:16:29.584 } 00:16:29.584 ] 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3912626 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:29.584 01:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:29.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.846 Malloc3 00:16:29.846 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:29.846 [2024-07-12 01:34:56.008291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.846 [2024-07-12 01:34:56.147244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.846 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.846 Asynchronous Event Request test 00:16:29.846 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.846 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.846 Registering asynchronous event callbacks... 00:16:29.846 Starting namespace attribute notice tests for all controllers... 00:16:29.846 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:29.846 aer_cb - Changed Namespace 00:16:29.846 Cleaning up... 00:16:30.107 [ 00:16:30.107 { 00:16:30.107 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:30.107 "subtype": "Discovery", 00:16:30.107 "listen_addresses": [], 00:16:30.107 "allow_any_host": true, 00:16:30.107 "hosts": [] 00:16:30.107 }, 00:16:30.108 { 00:16:30.108 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:30.108 "subtype": "NVMe", 00:16:30.108 "listen_addresses": [ 00:16:30.108 { 00:16:30.108 "trtype": "VFIOUSER", 00:16:30.108 "adrfam": "IPv4", 00:16:30.108 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:30.108 "trsvcid": "0" 00:16:30.108 } 00:16:30.108 ], 00:16:30.108 "allow_any_host": true, 00:16:30.108 "hosts": [], 00:16:30.108 "serial_number": "SPDK1", 00:16:30.108 "model_number": "SPDK bdev Controller", 00:16:30.108 "max_namespaces": 32, 00:16:30.108 "min_cntlid": 1, 00:16:30.108 "max_cntlid": 65519, 00:16:30.108 "namespaces": [ 00:16:30.108 { 00:16:30.108 "nsid": 1, 00:16:30.108 "bdev_name": "Malloc1", 00:16:30.108 "name": "Malloc1", 00:16:30.108 "nguid": "5C3DD33E40874112886641ED5FCD6E8F", 00:16:30.108 "uuid": "5c3dd33e-4087-4112-8866-41ed5fcd6e8f" 00:16:30.108 }, 00:16:30.108 { 00:16:30.108 "nsid": 2, 00:16:30.108 "bdev_name": "Malloc3", 00:16:30.108 "name": "Malloc3", 00:16:30.108 "nguid": "69768B7445D24774A1CB8A4D9873F91E", 00:16:30.108 "uuid": "69768b74-45d2-4774-a1cb-8a4d9873f91e" 00:16:30.108 } 00:16:30.108 ] 00:16:30.108 }, 00:16:30.108 { 00:16:30.108 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:30.108 "subtype": "NVMe", 00:16:30.108 "listen_addresses": [ 00:16:30.108 { 00:16:30.108 "trtype": "VFIOUSER", 00:16:30.108 "adrfam": "IPv4", 00:16:30.108 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:30.108 "trsvcid": "0" 00:16:30.108 } 00:16:30.108 ], 00:16:30.108 "allow_any_host": true, 00:16:30.108 "hosts": [], 00:16:30.108 "serial_number": "SPDK2", 00:16:30.108 "model_number": "SPDK bdev Controller", 00:16:30.108 
"max_namespaces": 32, 00:16:30.108 "min_cntlid": 1, 00:16:30.108 "max_cntlid": 65519, 00:16:30.108 "namespaces": [ 00:16:30.108 { 00:16:30.108 "nsid": 1, 00:16:30.108 "bdev_name": "Malloc2", 00:16:30.108 "name": "Malloc2", 00:16:30.108 "nguid": "F8AD1215809140D0AB87F37E9DD1A55C", 00:16:30.108 "uuid": "f8ad1215-8091-40d0-ab87-f37e9dd1a55c" 00:16:30.108 } 00:16:30.108 ] 00:16:30.108 } 00:16:30.108 ] 00:16:30.108 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3912626 00:16:30.108 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:30.108 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:30.108 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:30.108 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:30.108 [2024-07-12 01:34:56.369512] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:30.108 [2024-07-12 01:34:56.369554] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912880 ] 00:16:30.108 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.108 [2024-07-12 01:34:56.402760] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:30.108 [2024-07-12 01:34:56.408002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:30.108 [2024-07-12 01:34:56.408023] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9df99d7000 00:16:30.108 [2024-07-12 01:34:56.409002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.410001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.411003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.412008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.413013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.414021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.415031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.416035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.108 [2024-07-12 01:34:56.417046] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:30.108 [2024-07-12 01:34:56.417057] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9df879b000 00:16:30.108 [2024-07-12 01:34:56.418385] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:30.108 [2024-07-12 01:34:56.434582] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:30.108 [2024-07-12 01:34:56.434605] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:30.108 [2024-07-12 01:34:56.439679] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:30.108 [2024-07-12 01:34:56.439728] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:30.108 [2024-07-12 01:34:56.439808] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:30.108 [2024-07-12 01:34:56.439821] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:30.108 [2024-07-12 01:34:56.439826] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:30.108 [2024-07-12 01:34:56.440681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:30.108 [2024-07-12 01:34:56.440692] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:30.108 [2024-07-12 01:34:56.440699] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:30.108 [2024-07-12 01:34:56.441684] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:30.108 [2024-07-12 01:34:56.441693] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:30.108 [2024-07-12 01:34:56.441700] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.442693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:30.108 [2024-07-12 01:34:56.442702] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.443698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:30.108 [2024-07-12 01:34:56.443707] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:30.108 [2024-07-12 01:34:56.443711] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.443718] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.443823] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:30.108 [2024-07-12 01:34:56.443828] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.443833] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:30.108 [2024-07-12 01:34:56.444707] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:30.108 [2024-07-12 01:34:56.445715] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:30.108 [2024-07-12 01:34:56.446723] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:30.108 [2024-07-12 01:34:56.447724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:30.108 [2024-07-12 01:34:56.447761] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:30.108 [2024-07-12 01:34:56.448732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:30.108 [2024-07-12 01:34:56.448745] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:30.108 [2024-07-12 01:34:56.448749] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:30.108 [2024-07-12 01:34:56.448770] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:30.108 [2024-07-12 01:34:56.448777] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:30.108 [2024-07-12 01:34:56.448792] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.108 [2024-07-12 01:34:56.448797] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.108 [2024-07-12 01:34:56.448808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.108 [2024-07-12 01:34:56.455236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:30.108 [2024-07-12 01:34:56.455249] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:30.108 [2024-07-12 01:34:56.455254] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:30.108 [2024-07-12 01:34:56.455258] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:30.108 [2024-07-12 01:34:56.455263] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:30.108 [2024-07-12 01:34:56.455267] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:30.108 [2024-07-12 01:34:56.455272] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:30.109 [2024-07-12 01:34:56.455276] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:30.109 [2024-07-12 01:34:56.455284] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:30.109 [2024-07-12 01:34:56.455294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:30.109 [2024-07-12 01:34:56.463238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:30.109 [2024-07-12 01:34:56.463250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.109 [2024-07-12 01:34:56.463259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.109 [2024-07-12 01:34:56.463267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.109 [2024-07-12 01:34:56.463275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.109 [2024-07-12 01:34:56.463291] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:30.109 [2024-07-12 01:34:56.463301] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:30.109 [2024-07-12 01:34:56.463310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.471235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.471245] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:30.371 [2024-07-12 01:34:56.471250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.471256] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.471263] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.471273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.479244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.479308] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.479316] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.479323] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:30.371 [2024-07-12 01:34:56.479327] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:30.371 [2024-07-12 01:34:56.479334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.486235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.486245] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:30.371 [2024-07-12 01:34:56.486254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.486261] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.486268] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.371 [2024-07-12 01:34:56.486272] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.371 [2024-07-12 01:34:56.486278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.494234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.494247] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.494255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.494262] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.371 [2024-07-12 01:34:56.494266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.371 [2024-07-12 01:34:56.494272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.502234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.502246] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502253] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502261] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502271] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502276] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:30.371 [2024-07-12 01:34:56.502280] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:30.371 [2024-07-12 01:34:56.502285] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:30.371 [2024-07-12 01:34:56.502305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.510235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.510249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.518234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.518247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.526237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.526250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.534237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.534250] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:30.371 [2024-07-12 01:34:56.534256] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:30.371 [2024-07-12 01:34:56.534259] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:30.371 [2024-07-12 01:34:56.534262] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:30.371 [2024-07-12 01:34:56.534269] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:30.371 [2024-07-12 01:34:56.534276] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:30.371 [2024-07-12 01:34:56.534280] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:30.371 [2024-07-12 01:34:56.534286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.534293] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:30.371 [2024-07-12 01:34:56.534297] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.371 [2024-07-12 01:34:56.534305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.534313] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:30.371 [2024-07-12 01:34:56.534317] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:30.371 [2024-07-12 01:34:56.534323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:30.371 [2024-07-12 01:34:56.542235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.542250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.542259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:30.371 [2024-07-12 01:34:56.542267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:30.371 ===================================================== 00:16:30.371 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:30.371 ===================================================== 00:16:30.371 Controller Capabilities/Features 00:16:30.371 ================================ 00:16:30.371 Vendor ID: 4e58 00:16:30.371 Subsystem Vendor ID: 4e58 00:16:30.371 Serial Number: SPDK2 00:16:30.371 Model Number: SPDK bdev Controller 00:16:30.371 Firmware Version: 24.05.1 00:16:30.371 Recommended Arb Burst: 6 00:16:30.371 IEEE OUI Identifier: 8d 6b 50 00:16:30.371 Multi-path I/O 00:16:30.371 May have multiple subsystem ports: Yes 00:16:30.371 May have multiple controllers: Yes 00:16:30.371 Associated with SR-IOV VF: No 00:16:30.371 Max Data Transfer Size: 131072 00:16:30.371 Max Number of Namespaces: 32 00:16:30.371 Max Number of I/O Queues: 127 00:16:30.371 NVMe Specification Version (VS): 1.3 00:16:30.371 NVMe Specification Version (Identify): 1.3 00:16:30.371 Maximum Queue Entries: 256 00:16:30.372 Contiguous Queues Required: Yes 00:16:30.372 Arbitration Mechanisms Supported 00:16:30.372 Weighted Round Robin: Not Supported 00:16:30.372 Vendor Specific: Not Supported 00:16:30.372 Reset Timeout: 15000 ms 00:16:30.372 Doorbell Stride: 4 bytes 
00:16:30.372 NVM Subsystem Reset: Not Supported 00:16:30.372 Command Sets Supported 00:16:30.372 NVM Command Set: Supported 00:16:30.372 Boot Partition: Not Supported 00:16:30.372 Memory Page Size Minimum: 4096 bytes 00:16:30.372 Memory Page Size Maximum: 4096 bytes 00:16:30.372 Persistent Memory Region: Not Supported 00:16:30.372 Optional Asynchronous Events Supported 00:16:30.372 Namespace Attribute Notices: Supported 00:16:30.372 Firmware Activation Notices: Not Supported 00:16:30.372 ANA Change Notices: Not Supported 00:16:30.372 PLE Aggregate Log Change Notices: Not Supported 00:16:30.372 LBA Status Info Alert Notices: Not Supported 00:16:30.372 EGE Aggregate Log Change Notices: Not Supported 00:16:30.372 Normal NVM Subsystem Shutdown event: Not Supported 00:16:30.372 Zone Descriptor Change Notices: Not Supported 00:16:30.372 Discovery Log Change Notices: Not Supported 00:16:30.372 Controller Attributes 00:16:30.372 128-bit Host Identifier: Supported 00:16:30.372 Non-Operational Permissive Mode: Not Supported 00:16:30.372 NVM Sets: Not Supported 00:16:30.372 Read Recovery Levels: Not Supported 00:16:30.372 Endurance Groups: Not Supported 00:16:30.372 Predictable Latency Mode: Not Supported 00:16:30.372 Traffic Based Keep ALive: Not Supported 00:16:30.372 Namespace Granularity: Not Supported 00:16:30.372 SQ Associations: Not Supported 00:16:30.372 UUID List: Not Supported 00:16:30.372 Multi-Domain Subsystem: Not Supported 00:16:30.372 Fixed Capacity Management: Not Supported 00:16:30.372 Variable Capacity Management: Not Supported 00:16:30.372 Delete Endurance Group: Not Supported 00:16:30.372 Delete NVM Set: Not Supported 00:16:30.372 Extended LBA Formats Supported: Not Supported 00:16:30.372 Flexible Data Placement Supported: Not Supported 00:16:30.372 00:16:30.372 Controller Memory Buffer Support 00:16:30.372 ================================ 00:16:30.372 Supported: No 00:16:30.372 00:16:30.372 Persistent Memory Region Support 00:16:30.372 ================================ 00:16:30.372 Supported: No 00:16:30.372 00:16:30.372 Admin Command Set Attributes 00:16:30.372 ============================ 00:16:30.372 Security Send/Receive: Not Supported 00:16:30.372 Format NVM: Not Supported 00:16:30.372 Firmware Activate/Download: Not Supported 00:16:30.372 Namespace Management: Not Supported 00:16:30.372 Device Self-Test: Not Supported 00:16:30.372 Directives: Not Supported 00:16:30.372 NVMe-MI: Not Supported 00:16:30.372 Virtualization Management: Not Supported 00:16:30.372 Doorbell Buffer Config: Not Supported 00:16:30.372 Get LBA Status Capability: Not Supported 00:16:30.372 Command & Feature Lockdown Capability: Not Supported 00:16:30.372 Abort Command Limit: 4 00:16:30.372 Async Event Request Limit: 4 00:16:30.372 Number of Firmware Slots: N/A 00:16:30.372 Firmware Slot 1 Read-Only: N/A 00:16:30.372 Firmware Activation Without Reset: N/A 00:16:30.372 Multiple Update Detection Support: N/A 00:16:30.372 Firmware Update Granularity: No Information Provided 00:16:30.372 Per-Namespace SMART Log: No 00:16:30.372 Asymmetric Namespace Access Log Page: Not Supported 00:16:30.372 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:30.372 Command Effects Log Page: Supported 00:16:30.372 Get Log Page Extended Data: Supported 00:16:30.372 Telemetry Log Pages: Not Supported 00:16:30.372 Persistent Event Log Pages: Not Supported 00:16:30.372 Supported Log Pages Log Page: May Support 00:16:30.372 Commands Supported & Effects Log Page: Not Supported 00:16:30.372 Feature Identifiers & Effects Log Page:May 
Support 00:16:30.372 NVMe-MI Commands & Effects Log Page: May Support 00:16:30.372 Data Area 4 for Telemetry Log: Not Supported 00:16:30.372 Error Log Page Entries Supported: 128 00:16:30.372 Keep Alive: Supported 00:16:30.372 Keep Alive Granularity: 10000 ms 00:16:30.372 00:16:30.372 NVM Command Set Attributes 00:16:30.372 ========================== 00:16:30.372 Submission Queue Entry Size 00:16:30.372 Max: 64 00:16:30.372 Min: 64 00:16:30.372 Completion Queue Entry Size 00:16:30.372 Max: 16 00:16:30.372 Min: 16 00:16:30.372 Number of Namespaces: 32 00:16:30.372 Compare Command: Supported 00:16:30.372 Write Uncorrectable Command: Not Supported 00:16:30.372 Dataset Management Command: Supported 00:16:30.372 Write Zeroes Command: Supported 00:16:30.372 Set Features Save Field: Not Supported 00:16:30.372 Reservations: Not Supported 00:16:30.372 Timestamp: Not Supported 00:16:30.372 Copy: Supported 00:16:30.372 Volatile Write Cache: Present 00:16:30.372 Atomic Write Unit (Normal): 1 00:16:30.372 Atomic Write Unit (PFail): 1 00:16:30.372 Atomic Compare & Write Unit: 1 00:16:30.372 Fused Compare & Write: Supported 00:16:30.372 Scatter-Gather List 00:16:30.372 SGL Command Set: Supported (Dword aligned) 00:16:30.372 SGL Keyed: Not Supported 00:16:30.372 SGL Bit Bucket Descriptor: Not Supported 00:16:30.372 SGL Metadata Pointer: Not Supported 00:16:30.372 Oversized SGL: Not Supported 00:16:30.372 SGL Metadata Address: Not Supported 00:16:30.372 SGL Offset: Not Supported 00:16:30.372 Transport SGL Data Block: Not Supported 00:16:30.372 Replay Protected Memory Block: Not Supported 00:16:30.372 00:16:30.372 Firmware Slot Information 00:16:30.372 ========================= 00:16:30.372 Active slot: 1 00:16:30.372 Slot 1 Firmware Revision: 24.05.1 00:16:30.372 00:16:30.372 00:16:30.372 Commands Supported and Effects 00:16:30.372 ============================== 00:16:30.372 Admin Commands 00:16:30.372 -------------- 00:16:30.372 Get Log Page (02h): Supported 00:16:30.372 Identify (06h): Supported 00:16:30.372 Abort (08h): Supported 00:16:30.372 Set Features (09h): Supported 00:16:30.372 Get Features (0Ah): Supported 00:16:30.372 Asynchronous Event Request (0Ch): Supported 00:16:30.372 Keep Alive (18h): Supported 00:16:30.372 I/O Commands 00:16:30.372 ------------ 00:16:30.372 Flush (00h): Supported LBA-Change 00:16:30.372 Write (01h): Supported LBA-Change 00:16:30.372 Read (02h): Supported 00:16:30.372 Compare (05h): Supported 00:16:30.372 Write Zeroes (08h): Supported LBA-Change 00:16:30.372 Dataset Management (09h): Supported LBA-Change 00:16:30.372 Copy (19h): Supported LBA-Change 00:16:30.372 Unknown (79h): Supported LBA-Change 00:16:30.372 Unknown (7Ah): Supported 00:16:30.372 00:16:30.372 Error Log 00:16:30.372 ========= 00:16:30.372 00:16:30.372 Arbitration 00:16:30.372 =========== 00:16:30.372 Arbitration Burst: 1 00:16:30.372 00:16:30.372 Power Management 00:16:30.372 ================ 00:16:30.372 Number of Power States: 1 00:16:30.372 Current Power State: Power State #0 00:16:30.372 Power State #0: 00:16:30.372 Max Power: 0.00 W 00:16:30.372 Non-Operational State: Operational 00:16:30.372 Entry Latency: Not Reported 00:16:30.372 Exit Latency: Not Reported 00:16:30.372 Relative Read Throughput: 0 00:16:30.372 Relative Read Latency: 0 00:16:30.372 Relative Write Throughput: 0 00:16:30.372 Relative Write Latency: 0 00:16:30.372 Idle Power: Not Reported 00:16:30.372 Active Power: Not Reported 00:16:30.372 Non-Operational Permissive Mode: Not Supported 00:16:30.372 00:16:30.372 Health Information 
00:16:30.372 ================== 00:16:30.372 Critical Warnings: 00:16:30.372 Available Spare Space: OK 00:16:30.372 Temperature: OK 00:16:30.372 Device Reliability: OK 00:16:30.372 Read Only: No 00:16:30.372 Volatile Memory Backup: OK 00:16:30.372 Current Temperature: 0 Kelvin[2024-07-12 01:34:56.542366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:30.372 [2024-07-12 01:34:56.550235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:30.372 [2024-07-12 01:34:56.550262] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:30.372 [2024-07-12 01:34:56.550271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.372 [2024-07-12 01:34:56.550277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.372 [2024-07-12 01:34:56.550283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.372 [2024-07-12 01:34:56.550289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.372 [2024-07-12 01:34:56.550334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:30.372 [2024-07-12 01:34:56.550345] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:30.372 [2024-07-12 01:34:56.551338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:30.372 [2024-07-12 01:34:56.551387] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:30.373 [2024-07-12 01:34:56.551393] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:30.373 [2024-07-12 01:34:56.552339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:30.373 [2024-07-12 01:34:56.552351] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:30.373 [2024-07-12 01:34:56.552401] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:30.373 [2024-07-12 01:34:56.553773] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:30.373 (-273 Celsius) 00:16:30.373 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:30.373 Available Spare: 0% 00:16:30.373 Available Spare Threshold: 0% 00:16:30.373 Life Percentage Used: 0% 00:16:30.373 Data Units Read: 0 00:16:30.373 Data Units Written: 0 00:16:30.373 Host Read Commands: 0 00:16:30.373 Host Write Commands: 0 00:16:30.373 Controller Busy Time: 0 minutes 00:16:30.373 Power Cycles: 0 00:16:30.373 Power On Hours: 0 hours 00:16:30.373 Unsafe Shutdowns: 0 00:16:30.373 Unrecoverable Media Errors: 0 00:16:30.373 Lifetime Error Log Entries: 0 00:16:30.373 Warning Temperature Time: 0 
minutes 00:16:30.373 Critical Temperature Time: 0 minutes 00:16:30.373 00:16:30.373 Number of Queues 00:16:30.373 ================ 00:16:30.373 Number of I/O Submission Queues: 127 00:16:30.373 Number of I/O Completion Queues: 127 00:16:30.373 00:16:30.373 Active Namespaces 00:16:30.373 ================= 00:16:30.373 Namespace ID:1 00:16:30.373 Error Recovery Timeout: Unlimited 00:16:30.373 Command Set Identifier: NVM (00h) 00:16:30.373 Deallocate: Supported 00:16:30.373 Deallocated/Unwritten Error: Not Supported 00:16:30.373 Deallocated Read Value: Unknown 00:16:30.373 Deallocate in Write Zeroes: Not Supported 00:16:30.373 Deallocated Guard Field: 0xFFFF 00:16:30.373 Flush: Supported 00:16:30.373 Reservation: Supported 00:16:30.373 Namespace Sharing Capabilities: Multiple Controllers 00:16:30.373 Size (in LBAs): 131072 (0GiB) 00:16:30.373 Capacity (in LBAs): 131072 (0GiB) 00:16:30.373 Utilization (in LBAs): 131072 (0GiB) 00:16:30.373 NGUID: F8AD1215809140D0AB87F37E9DD1A55C 00:16:30.373 UUID: f8ad1215-8091-40d0-ab87-f37e9dd1a55c 00:16:30.373 Thin Provisioning: Not Supported 00:16:30.373 Per-NS Atomic Units: Yes 00:16:30.373 Atomic Boundary Size (Normal): 0 00:16:30.373 Atomic Boundary Size (PFail): 0 00:16:30.373 Atomic Boundary Offset: 0 00:16:30.373 Maximum Single Source Range Length: 65535 00:16:30.373 Maximum Copy Length: 65535 00:16:30.373 Maximum Source Range Count: 1 00:16:30.373 NGUID/EUI64 Never Reused: No 00:16:30.373 Namespace Write Protected: No 00:16:30.373 Number of LBA Formats: 1 00:16:30.373 Current LBA Format: LBA Format #00 00:16:30.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:30.373 00:16:30.373 01:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:30.373 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.633 [2024-07-12 01:34:56.738267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.926 Initializing NVMe Controllers 00:16:35.926 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.926 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:35.926 Initialization complete. Launching workers. 
00:16:35.926 ======================================================== 00:16:35.926 Latency(us) 00:16:35.926 Device Information : IOPS MiB/s Average min max 00:16:35.926 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40061.66 156.49 3194.94 833.26 7323.24 00:16:35.926 ======================================================== 00:16:35.926 Total : 40061.66 156.49 3194.94 833.26 7323.24 00:16:35.926 00:16:35.926 [2024-07-12 01:35:01.846425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.926 01:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:35.926 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.926 [2024-07-12 01:35:02.020980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.220 Initializing NVMe Controllers 00:16:41.220 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.220 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:41.220 Initialization complete. Launching workers. 00:16:41.220 ======================================================== 00:16:41.220 Latency(us) 00:16:41.220 Device Information : IOPS MiB/s Average min max 00:16:41.220 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36086.60 140.96 3548.03 1100.16 7383.38 00:16:41.220 ======================================================== 00:16:41.220 Total : 36086.60 140.96 3548.03 1100.16 7383.38 00:16:41.220 00:16:41.220 [2024-07-12 01:35:07.042220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.220 01:35:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:41.220 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.220 [2024-07-12 01:35:07.234414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.511 [2024-07-12 01:35:12.369310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.511 Initializing NVMe Controllers 00:16:46.511 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.511 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.511 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:46.511 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:46.511 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:46.511 Initialization complete. Launching workers. 
00:16:46.511 Starting thread on core 2 00:16:46.511 Starting thread on core 3 00:16:46.511 Starting thread on core 1 00:16:46.511 01:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:46.511 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.511 [2024-07-12 01:35:12.640705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.814 [2024-07-12 01:35:15.692515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.815 Initializing NVMe Controllers 00:16:49.815 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.815 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.815 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:49.815 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:49.815 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:49.815 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:49.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:49.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:49.815 Initialization complete. Launching workers. 00:16:49.815 Starting thread on core 1 with urgent priority queue 00:16:49.815 Starting thread on core 2 with urgent priority queue 00:16:49.815 Starting thread on core 3 with urgent priority queue 00:16:49.815 Starting thread on core 0 with urgent priority queue 00:16:49.815 SPDK bdev Controller (SPDK2 ) core 0: 14173.00 IO/s 7.06 secs/100000 ios 00:16:49.815 SPDK bdev Controller (SPDK2 ) core 1: 8948.33 IO/s 11.18 secs/100000 ios 00:16:49.815 SPDK bdev Controller (SPDK2 ) core 2: 17388.00 IO/s 5.75 secs/100000 ios 00:16:49.815 SPDK bdev Controller (SPDK2 ) core 3: 10404.00 IO/s 9.61 secs/100000 ios 00:16:49.815 ======================================================== 00:16:49.815 00:16:49.815 01:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:49.815 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.815 [2024-07-12 01:35:15.963755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.815 Initializing NVMe Controllers 00:16:49.815 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.815 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.815 Namespace ID: 1 size: 0GB 00:16:49.815 Initialization complete. 00:16:49.815 INFO: using host memory buffer for IO 00:16:49.815 Hello world! 
00:16:49.815 [2024-07-12 01:35:15.971792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.815 01:35:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:49.815 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.076 [2024-07-12 01:35:16.240163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.020 Initializing NVMe Controllers 00:16:51.020 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.020 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.020 Initialization complete. Launching workers. 00:16:51.020 submit (in ns) avg, min, max = 8948.0, 3897.5, 4000576.7 00:16:51.020 complete (in ns) avg, min, max = 15675.0, 2385.0, 3998991.7 00:16:51.020 00:16:51.020 Submit histogram 00:16:51.020 ================ 00:16:51.020 Range in us Cumulative Count 00:16:51.020 3.893 - 3.920: 0.1120% ( 22) 00:16:51.020 3.920 - 3.947: 1.0640% ( 187) 00:16:51.020 3.947 - 3.973: 4.9331% ( 760) 00:16:51.020 3.973 - 4.000: 12.3454% ( 1456) 00:16:51.020 4.000 - 4.027: 22.5373% ( 2002) 00:16:51.020 4.027 - 4.053: 33.8848% ( 2229) 00:16:51.020 4.053 - 4.080: 46.6680% ( 2511) 00:16:51.020 4.080 - 4.107: 63.8039% ( 3366) 00:16:51.020 4.107 - 4.133: 79.5194% ( 3087) 00:16:51.020 4.133 - 4.160: 90.5055% ( 2158) 00:16:51.020 4.160 - 4.187: 95.9477% ( 1069) 00:16:51.020 4.187 - 4.213: 98.5033% ( 502) 00:16:51.020 4.213 - 4.240: 99.3127% ( 159) 00:16:51.020 4.240 - 4.267: 99.4095% ( 19) 00:16:51.020 4.267 - 4.293: 99.4298% ( 4) 00:16:51.020 4.587 - 4.613: 99.4400% ( 2) 00:16:51.020 4.613 - 4.640: 99.4451% ( 1) 00:16:51.020 4.640 - 4.667: 99.4502% ( 1) 00:16:51.020 4.747 - 4.773: 99.4553% ( 1) 00:16:51.020 4.800 - 4.827: 99.4604% ( 1) 00:16:51.020 4.933 - 4.960: 99.4705% ( 2) 00:16:51.020 5.173 - 5.200: 99.4756% ( 1) 00:16:51.020 5.387 - 5.413: 99.4807% ( 1) 00:16:51.020 5.440 - 5.467: 99.4858% ( 1) 00:16:51.020 5.467 - 5.493: 99.4909% ( 1) 00:16:51.020 5.547 - 5.573: 99.4960% ( 1) 00:16:51.020 5.573 - 5.600: 99.5011% ( 1) 00:16:51.020 5.600 - 5.627: 99.5062% ( 1) 00:16:51.020 6.000 - 6.027: 99.5113% ( 1) 00:16:51.020 6.027 - 6.053: 99.5164% ( 1) 00:16:51.020 6.053 - 6.080: 99.5316% ( 3) 00:16:51.020 6.080 - 6.107: 99.5418% ( 2) 00:16:51.020 6.160 - 6.187: 99.5571% ( 3) 00:16:51.020 6.187 - 6.213: 99.5622% ( 1) 00:16:51.020 6.213 - 6.240: 99.5673% ( 1) 00:16:51.020 6.240 - 6.267: 99.5724% ( 1) 00:16:51.020 6.347 - 6.373: 99.5876% ( 3) 00:16:51.020 6.480 - 6.507: 99.5927% ( 1) 00:16:51.020 6.507 - 6.533: 99.6029% ( 2) 00:16:51.020 6.560 - 6.587: 99.6080% ( 1) 00:16:51.020 6.640 - 6.667: 99.6131% ( 1) 00:16:51.020 6.667 - 6.693: 99.6233% ( 2) 00:16:51.020 6.693 - 6.720: 99.6284% ( 1) 00:16:51.020 6.720 - 6.747: 99.6436% ( 3) 00:16:51.020 6.747 - 6.773: 99.6538% ( 2) 00:16:51.020 6.773 - 6.800: 99.6589% ( 1) 00:16:51.020 6.800 - 6.827: 99.6691% ( 2) 00:16:51.020 6.827 - 6.880: 99.6793% ( 2) 00:16:51.020 6.880 - 6.933: 99.6996% ( 4) 00:16:51.020 6.933 - 6.987: 99.7251% ( 5) 00:16:51.020 6.987 - 7.040: 99.7404% ( 3) 00:16:51.020 7.040 - 7.093: 99.7505% ( 2) 00:16:51.020 7.093 - 7.147: 99.7607% ( 2) 00:16:51.020 7.147 - 7.200: 99.7658% ( 1) 00:16:51.020 7.200 - 7.253: 99.7811% ( 3) 00:16:51.020 7.253 - 7.307: 99.7913% ( 2) 00:16:51.020 7.307 - 7.360: 99.7964% ( 1) 
00:16:51.020 7.360 - 7.413: 99.8015% ( 1) 00:16:51.020 7.413 - 7.467: 99.8065% ( 1) 00:16:51.020 7.520 - 7.573: 99.8167% ( 2) 00:16:51.020 7.680 - 7.733: 99.8218% ( 1) 00:16:51.020 7.787 - 7.840: 99.8269% ( 1) 00:16:51.020 7.840 - 7.893: 99.8371% ( 2) 00:16:51.020 7.893 - 7.947: 99.8422% ( 1) 00:16:51.020 8.107 - 8.160: 99.8524% ( 2) 00:16:51.020 8.160 - 8.213: 99.8575% ( 1) 00:16:51.020 10.827 - 10.880: 99.8625% ( 1) 00:16:51.020 12.000 - 12.053: 99.8676% ( 1) 00:16:51.020 14.720 - 14.827: 99.8727% ( 1) 00:16:51.020 14.933 - 15.040: 99.8778% ( 1) 00:16:51.020 3495.253 - 3522.560: 99.8829% ( 1) 00:16:51.020 3986.773 - 4014.080: 100.0000% ( 23) 00:16:51.020 00:16:51.020 Complete histogram 00:16:51.021 ================== 00:16:51.021 Range in us Cumulative Count 00:16:51.021 2.373 - 2.387: 0.0153% ( 3) 00:16:51.021 2.387 - 2.400: 0.9011% ( 174) 00:16:51.021 2.400 - 2.413: 1.0131% ( 22) 00:16:51.021 2.413 - 2.427: 1.1556% ( 28) 00:16:51.021 2.427 - 2.440: 3.0953% ( 381) 00:16:51.021 2.440 - [2024-07-12 01:35:17.340895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.283 2.453: 50.6491% ( 9341) 00:16:51.283 2.453 - 2.467: 59.4105% ( 1721) 00:16:51.283 2.467 - 2.480: 76.3784% ( 3333) 00:16:51.283 2.480 - 2.493: 80.1914% ( 749) 00:16:51.283 2.493 - 2.507: 82.3245% ( 419) 00:16:51.283 2.507 - 2.520: 86.1986% ( 761) 00:16:51.283 2.520 - 2.533: 92.1753% ( 1174) 00:16:51.283 2.533 - 2.547: 95.9324% ( 738) 00:16:51.283 2.547 - 2.560: 97.8924% ( 385) 00:16:51.283 2.560 - 2.573: 98.8546% ( 189) 00:16:51.283 2.573 - 2.587: 99.2975% ( 87) 00:16:51.283 2.587 - 2.600: 99.3840% ( 17) 00:16:51.283 2.600 - 2.613: 99.4095% ( 5) 00:16:51.283 2.613 - 2.627: 99.4196% ( 2) 00:16:51.283 2.627 - 2.640: 99.4247% ( 1) 00:16:51.283 4.640 - 4.667: 99.4298% ( 1) 00:16:51.283 4.720 - 4.747: 99.4349% ( 1) 00:16:51.283 4.747 - 4.773: 99.4400% ( 1) 00:16:51.283 4.773 - 4.800: 99.4451% ( 1) 00:16:51.283 4.800 - 4.827: 99.4553% ( 2) 00:16:51.283 4.827 - 4.853: 99.4604% ( 1) 00:16:51.283 4.853 - 4.880: 99.4655% ( 1) 00:16:51.283 5.067 - 5.093: 99.4705% ( 1) 00:16:51.283 5.093 - 5.120: 99.4756% ( 1) 00:16:51.283 5.120 - 5.147: 99.4858% ( 2) 00:16:51.283 5.147 - 5.173: 99.4909% ( 1) 00:16:51.283 5.200 - 5.227: 99.4960% ( 1) 00:16:51.283 5.253 - 5.280: 99.5011% ( 1) 00:16:51.283 5.280 - 5.307: 99.5062% ( 1) 00:16:51.283 5.307 - 5.333: 99.5113% ( 1) 00:16:51.283 5.413 - 5.440: 99.5215% ( 2) 00:16:51.283 5.493 - 5.520: 99.5316% ( 2) 00:16:51.283 5.520 - 5.547: 99.5367% ( 1) 00:16:51.283 5.573 - 5.600: 99.5469% ( 2) 00:16:51.283 5.653 - 5.680: 99.5520% ( 1) 00:16:51.283 5.707 - 5.733: 99.5571% ( 1) 00:16:51.283 5.760 - 5.787: 99.5673% ( 2) 00:16:51.283 5.787 - 5.813: 99.5724% ( 1) 00:16:51.283 5.813 - 5.840: 99.5775% ( 1) 00:16:51.283 5.893 - 5.920: 99.5825% ( 1) 00:16:51.283 5.947 - 5.973: 99.5876% ( 1) 00:16:51.283 6.107 - 6.133: 99.5927% ( 1) 00:16:51.283 6.133 - 6.160: 99.5978% ( 1) 00:16:51.283 6.187 - 6.213: 99.6029% ( 1) 00:16:51.283 6.587 - 6.613: 99.6182% ( 3) 00:16:51.283 6.693 - 6.720: 99.6233% ( 1) 00:16:51.283 6.933 - 6.987: 99.6284% ( 1) 00:16:51.283 7.627 - 7.680: 99.6385% ( 2) 00:16:51.283 7.680 - 7.733: 99.6436% ( 1) 00:16:51.283 7.947 - 8.000: 99.6487% ( 1) 00:16:51.283 12.747 - 12.800: 99.6538% ( 1) 00:16:51.283 13.760 - 13.867: 99.6589% ( 1) 00:16:51.283 14.187 - 14.293: 99.6640% ( 1) 00:16:51.283 14.293 - 14.400: 99.6691% ( 1) 00:16:51.283 3577.173 - 3604.480: 99.6742% ( 1) 00:16:51.283 3986.773 - 4014.080: 100.0000% ( 64) 00:16:51.283 00:16:51.283 
01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:51.283 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:51.283 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:51.283 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:51.283 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:51.283 [ 00:16:51.283 { 00:16:51.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.284 "subtype": "Discovery", 00:16:51.284 "listen_addresses": [], 00:16:51.284 "allow_any_host": true, 00:16:51.284 "hosts": [] 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:51.284 "subtype": "NVMe", 00:16:51.284 "listen_addresses": [ 00:16:51.284 { 00:16:51.284 "trtype": "VFIOUSER", 00:16:51.284 "adrfam": "IPv4", 00:16:51.284 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:51.284 "trsvcid": "0" 00:16:51.284 } 00:16:51.284 ], 00:16:51.284 "allow_any_host": true, 00:16:51.284 "hosts": [], 00:16:51.284 "serial_number": "SPDK1", 00:16:51.284 "model_number": "SPDK bdev Controller", 00:16:51.284 "max_namespaces": 32, 00:16:51.284 "min_cntlid": 1, 00:16:51.284 "max_cntlid": 65519, 00:16:51.284 "namespaces": [ 00:16:51.284 { 00:16:51.284 "nsid": 1, 00:16:51.284 "bdev_name": "Malloc1", 00:16:51.284 "name": "Malloc1", 00:16:51.284 "nguid": "5C3DD33E40874112886641ED5FCD6E8F", 00:16:51.284 "uuid": "5c3dd33e-4087-4112-8866-41ed5fcd6e8f" 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "nsid": 2, 00:16:51.284 "bdev_name": "Malloc3", 00:16:51.284 "name": "Malloc3", 00:16:51.284 "nguid": "69768B7445D24774A1CB8A4D9873F91E", 00:16:51.284 "uuid": "69768b74-45d2-4774-a1cb-8a4d9873f91e" 00:16:51.284 } 00:16:51.284 ] 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:51.284 "subtype": "NVMe", 00:16:51.284 "listen_addresses": [ 00:16:51.284 { 00:16:51.284 "trtype": "VFIOUSER", 00:16:51.284 "adrfam": "IPv4", 00:16:51.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:51.284 "trsvcid": "0" 00:16:51.284 } 00:16:51.284 ], 00:16:51.284 "allow_any_host": true, 00:16:51.284 "hosts": [], 00:16:51.284 "serial_number": "SPDK2", 00:16:51.284 "model_number": "SPDK bdev Controller", 00:16:51.284 "max_namespaces": 32, 00:16:51.284 "min_cntlid": 1, 00:16:51.284 "max_cntlid": 65519, 00:16:51.284 "namespaces": [ 00:16:51.284 { 00:16:51.284 "nsid": 1, 00:16:51.284 "bdev_name": "Malloc2", 00:16:51.284 "name": "Malloc2", 00:16:51.284 "nguid": "F8AD1215809140D0AB87F37E9DD1A55C", 00:16:51.284 "uuid": "f8ad1215-8091-40d0-ab87-f37e9dd1a55c" 00:16:51.284 } 00:16:51.284 ] 00:16:51.284 } 00:16:51.284 ] 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3916913 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:51.284 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:51.284 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.545 Malloc4 00:16:51.545 [2024-07-12 01:35:17.732164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.545 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:51.545 [2024-07-12 01:35:17.887154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.807 01:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:51.807 Asynchronous Event Request test 00:16:51.807 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.807 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.807 Registering asynchronous event callbacks... 00:16:51.807 Starting namespace attribute notice tests for all controllers... 00:16:51.807 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:51.807 aer_cb - Changed Namespace 00:16:51.807 Cleaning up... 
00:16:51.807 [ 00:16:51.807 { 00:16:51.807 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.807 "subtype": "Discovery", 00:16:51.807 "listen_addresses": [], 00:16:51.807 "allow_any_host": true, 00:16:51.807 "hosts": [] 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:51.807 "subtype": "NVMe", 00:16:51.807 "listen_addresses": [ 00:16:51.807 { 00:16:51.807 "trtype": "VFIOUSER", 00:16:51.807 "adrfam": "IPv4", 00:16:51.807 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:51.807 "trsvcid": "0" 00:16:51.807 } 00:16:51.807 ], 00:16:51.807 "allow_any_host": true, 00:16:51.807 "hosts": [], 00:16:51.807 "serial_number": "SPDK1", 00:16:51.807 "model_number": "SPDK bdev Controller", 00:16:51.807 "max_namespaces": 32, 00:16:51.807 "min_cntlid": 1, 00:16:51.807 "max_cntlid": 65519, 00:16:51.807 "namespaces": [ 00:16:51.807 { 00:16:51.807 "nsid": 1, 00:16:51.807 "bdev_name": "Malloc1", 00:16:51.807 "name": "Malloc1", 00:16:51.807 "nguid": "5C3DD33E40874112886641ED5FCD6E8F", 00:16:51.807 "uuid": "5c3dd33e-4087-4112-8866-41ed5fcd6e8f" 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "nsid": 2, 00:16:51.807 "bdev_name": "Malloc3", 00:16:51.807 "name": "Malloc3", 00:16:51.807 "nguid": "69768B7445D24774A1CB8A4D9873F91E", 00:16:51.807 "uuid": "69768b74-45d2-4774-a1cb-8a4d9873f91e" 00:16:51.807 } 00:16:51.807 ] 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:51.807 "subtype": "NVMe", 00:16:51.807 "listen_addresses": [ 00:16:51.807 { 00:16:51.807 "trtype": "VFIOUSER", 00:16:51.807 "adrfam": "IPv4", 00:16:51.807 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:51.807 "trsvcid": "0" 00:16:51.807 } 00:16:51.807 ], 00:16:51.807 "allow_any_host": true, 00:16:51.807 "hosts": [], 00:16:51.807 "serial_number": "SPDK2", 00:16:51.807 "model_number": "SPDK bdev Controller", 00:16:51.807 "max_namespaces": 32, 00:16:51.807 "min_cntlid": 1, 00:16:51.807 "max_cntlid": 65519, 00:16:51.807 "namespaces": [ 00:16:51.807 { 00:16:51.807 "nsid": 1, 00:16:51.807 "bdev_name": "Malloc2", 00:16:51.807 "name": "Malloc2", 00:16:51.807 "nguid": "F8AD1215809140D0AB87F37E9DD1A55C", 00:16:51.807 "uuid": "f8ad1215-8091-40d0-ab87-f37e9dd1a55c" 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "nsid": 2, 00:16:51.807 "bdev_name": "Malloc4", 00:16:51.807 "name": "Malloc4", 00:16:51.807 "nguid": "DFE12933E9EF4846BD6C588A9A969880", 00:16:51.807 "uuid": "dfe12933-e9ef-4846-bd6c-588a9a969880" 00:16:51.807 } 00:16:51.807 ] 00:16:51.807 } 00:16:51.807 ] 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3916913 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3907849 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3907849 ']' 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3907849 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3907849 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3907849' 00:16:51.807 killing process with pid 3907849 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3907849 00:16:51.807 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3907849 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3917032 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3917032' 00:16:52.070 Process pid: 3917032 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3917032 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3917032 ']' 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.070 01:35:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:52.070 [2024-07-12 01:35:18.350455] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:52.070 [2024-07-12 01:35:18.351405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:52.070 [2024-07-12 01:35:18.351447] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.070 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.070 [2024-07-12 01:35:18.421100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.332 [2024-07-12 01:35:18.453786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.332 [2024-07-12 01:35:18.453826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:52.332 [2024-07-12 01:35:18.453834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.332 [2024-07-12 01:35:18.453840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.332 [2024-07-12 01:35:18.453846] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.332 [2024-07-12 01:35:18.453988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.332 [2024-07-12 01:35:18.454109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.332 [2024-07-12 01:35:18.454268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.332 [2024-07-12 01:35:18.454282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.332 [2024-07-12 01:35:18.520205] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:52.332 [2024-07-12 01:35:18.520214] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:52.332 [2024-07-12 01:35:18.521247] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:52.332 [2024-07-12 01:35:18.521808] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:52.332 [2024-07-12 01:35:18.521883] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:52.905 01:35:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.905 01:35:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:52.905 01:35:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:53.849 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:54.110 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:54.110 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:54.110 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.110 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:54.110 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:54.110 Malloc1 00:16:54.372 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:54.372 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:54.633 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:54.633 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:54.633 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:54.633 01:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:54.941 Malloc2 00:16:54.941 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:55.202 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:55.202 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3917032 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3917032 ']' 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3917032 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917032 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917032' 00:16:55.463 killing process with pid 3917032 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3917032 00:16:55.463 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3917032 00:16:55.725 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:55.725 01:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:55.725 00:16:55.725 real 0m50.527s 00:16:55.725 user 3m20.525s 00:16:55.725 sys 0m3.126s 00:16:55.725 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.725 01:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:55.725 ************************************ 00:16:55.725 END TEST nvmf_vfio_user 00:16:55.725 ************************************ 00:16:55.725 01:35:21 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:55.725 01:35:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:55.725 01:35:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:55.725 01:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.725 ************************************ 00:16:55.725 START TEST nvmf_vfio_user_nvme_compliance 00:16:55.725 
************************************ 00:16:55.725 01:35:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:55.726 * Looking for test storage... 00:16:55.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3917934 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3917934' 00:16:55.726 Process pid: 3917934 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3917934 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3917934 ']' 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.726 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.988 [2024-07-12 01:35:22.113542] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:55.988 [2024-07-12 01:35:22.113595] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.988 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.988 [2024-07-12 01:35:22.180832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.988 [2024-07-12 01:35:22.211815] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.988 [2024-07-12 01:35:22.211855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.988 [2024-07-12 01:35:22.211863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.988 [2024-07-12 01:35:22.211869] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.988 [2024-07-12 01:35:22.211875] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.988 [2024-07-12 01:35:22.212011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.988 [2024-07-12 01:35:22.212134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.988 [2024-07-12 01:35:22.212137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.561 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.561 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:56.561 01:35:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 malloc0 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.943 
01:35:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:57.943 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.943 00:16:57.943 00:16:57.943 CUnit - A unit testing framework for C - Version 2.1-3 00:16:57.943 http://cunit.sourceforge.net/ 00:16:57.943 00:16:57.943 00:16:57.943 Suite: nvme_compliance 00:16:57.943 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 01:35:24.143675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.943 [2024-07-12 01:35:24.148039] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:57.943 [2024-07-12 01:35:24.148050] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:57.943 [2024-07-12 01:35:24.148055] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:57.943 [2024-07-12 01:35:24.149709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.943 passed 00:16:57.943 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 01:35:24.242277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.943 [2024-07-12 01:35:24.245290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.943 passed 00:16:58.203 Test: admin_identify_ns ...[2024-07-12 01:35:24.340457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.204 [2024-07-12 01:35:24.404242] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:58.204 [2024-07-12 01:35:24.412244] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:58.204 [2024-07-12 01:35:24.433352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.204 passed 00:16:58.204 Test: admin_get_features_mandatory_features ...[2024-07-12 01:35:24.525008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.204 [2024-07-12 01:35:24.528022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.464 passed 00:16:58.464 Test: admin_get_features_optional_features ...[2024-07-12 01:35:24.621545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.464 [2024-07-12 01:35:24.624558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.464 passed 00:16:58.464 Test: admin_set_features_number_of_queues ...[2024-07-12 01:35:24.718688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.726 [2024-07-12 01:35:24.823341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.726 passed 00:16:58.726 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 01:35:24.914964] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.726 [2024-07-12 01:35:24.917985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.726 passed 00:16:58.726 Test: admin_get_log_page_with_lpo ...[2024-07-12 01:35:25.012134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.726 [2024-07-12 01:35:25.077243] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:58.989 [2024-07-12 01:35:25.090301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.989 passed 00:16:58.989 Test: fabric_property_get ...[2024-07-12 01:35:25.184313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.989 [2024-07-12 01:35:25.185558] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:58.989 [2024-07-12 01:35:25.187331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.989 passed 00:16:58.989 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 01:35:25.282002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.989 [2024-07-12 01:35:25.283250] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:58.989 [2024-07-12 01:35:25.285024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.989 passed 00:16:59.248 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 01:35:25.377136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.248 [2024-07-12 01:35:25.462238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.248 [2024-07-12 01:35:25.478238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.248 [2024-07-12 01:35:25.483327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.248 passed 00:16:59.248 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 01:35:25.574955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.248 [2024-07-12 01:35:25.576188] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:59.248 [2024-07-12 01:35:25.577979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.507 passed 00:16:59.507 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 01:35:25.669473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.507 [2024-07-12 01:35:25.745245] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:59.507 [2024-07-12 01:35:25.769238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.507 [2024-07-12 01:35:25.774327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.507 passed 00:16:59.765 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 01:35:25.868313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.765 [2024-07-12 01:35:25.869546] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:59.765 [2024-07-12 01:35:25.869567] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:59.765 [2024-07-12 01:35:25.871330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.765 passed 00:16:59.765 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 01:35:25.964471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.765 [2024-07-12 01:35:26.056238] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:59.765 [2024-07-12 01:35:26.064237] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:59.765 [2024-07-12 01:35:26.072237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:59.765 [2024-07-12 01:35:26.080238] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:59.765 [2024-07-12 01:35:26.105323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.025 passed 00:17:00.025 Test: admin_create_io_sq_verify_pc ...[2024-07-12 01:35:26.194918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.025 [2024-07-12 01:35:26.210244] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:00.026 [2024-07-12 01:35:26.228063] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.026 passed 00:17:00.026 Test: admin_create_io_qp_max_qps ...[2024-07-12 01:35:26.321577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.406 [2024-07-12 01:35:27.439239] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:01.665 [2024-07-12 01:35:27.822371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.665 passed 00:17:01.665 Test: admin_create_io_sq_shared_cq ...[2024-07-12 01:35:27.916948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.924 [2024-07-12 01:35:28.047234] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:01.924 [2024-07-12 01:35:28.084419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.924 passed 00:17:01.924 00:17:01.924 Run Summary: Type Total Ran Passed Failed Inactive 00:17:01.924 suites 1 1 n/a 0 0 00:17:01.924 tests 18 18 18 0 0 00:17:01.924 asserts 360 360 360 0 n/a 00:17:01.924 00:17:01.924 Elapsed time = 1.653 seconds 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3917934 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3917934 ']' 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3917934 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917934 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917934' 00:17:01.924 killing process with pid 3917934 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3917934 00:17:01.924 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3917934 00:17:02.185 01:35:28 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:02.185 00:17:02.185 real 0m6.396s 00:17:02.185 user 0m18.371s 00:17:02.185 sys 0m0.493s 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.185 ************************************ 00:17:02.185 END TEST nvmf_vfio_user_nvme_compliance 00:17:02.185 ************************************ 00:17:02.185 01:35:28 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:02.185 01:35:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:02.185 01:35:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.185 01:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.185 ************************************ 00:17:02.185 START TEST nvmf_vfio_user_fuzz 00:17:02.185 ************************************ 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:02.185 * Looking for test storage... 00:17:02.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.185 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3919100 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3919100' 00:17:02.186 Process pid: 3919100 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3919100 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3919100 ']' 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
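The fuzz-target bring-up traced above reduces to the short sketch below. $SPDK_ROOT and the use of scripts/rpc.py in place of the harness's rpc_cmd/waitforlisten helpers are illustrative assumptions; the binary path, flags, and RPC socket are the ones visible in the trace.

# minimal sketch, assuming $SPDK_ROOT points at the SPDK tree checked out above
rm -rf /var/run/vfio-user
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll the default RPC socket until the target is ready to accept commands
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done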
00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.186 01:35:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.126 01:35:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:03.126 01:35:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:17:03.126 01:35:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.067 malloc0 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.067 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:04.328 01:35:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:36.482 Fuzzing completed. 
Shutting down the fuzz application 00:17:36.482 00:17:36.482 Dumping successful admin opcodes: 00:17:36.482 8, 9, 10, 24, 00:17:36.482 Dumping successful io opcodes: 00:17:36.482 0, 00:17:36.482 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1157038, total successful commands: 4552, random_seed: 2585141696 00:17:36.482 NS: 0x200003a1ef00 admin qp, Total commands completed: 145588, total successful commands: 1180, random_seed: 700538624 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3919100 ']' 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3919100' 00:17:36.482 killing process with pid 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3919100 00:17:36.482 01:36:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:36.482 01:36:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:36.482 00:17:36.482 real 0m33.641s 00:17:36.482 user 0m38.026s 00:17:36.482 sys 0m26.402s 00:17:36.482 01:36:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.482 01:36:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:36.482 ************************************ 00:17:36.482 END TEST nvmf_vfio_user_fuzz 00:17:36.482 ************************************ 00:17:36.482 01:36:02 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:36.482 01:36:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:36.482 01:36:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.482 01:36:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.482 ************************************ 00:17:36.482 START TEST nvmf_host_management 00:17:36.482 
************************************ 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:36.482 * Looking for test storage... 00:17:36.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.482 01:36:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.623 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.624 01:36:10 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:44.624 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:44.624 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:44.624 Found net devices under 0000:31:00.0: cvl_0_0 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:44.624 Found net devices under 0000:31:00.1: cvl_0_1 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.624 01:36:10 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:17:44.624 00:17:44.624 --- 10.0.0.2 ping statistics --- 00:17:44.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.624 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:17:44.624 00:17:44.624 --- 10.0.0.1 ping statistics --- 00:17:44.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.624 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3930418 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3930418 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3930418 ']' 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
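The two successful pings above confirm the namespace layout that nvmf_tcp_init assembled a few records earlier; stripped of the harness wrappers it is roughly the sequence below. The interface names cvl_0_0/cvl_0_1 are the two E810 ports detected above, and every command is taken from the trace itself.

# move the target-side port into its own namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the default port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1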
00:17:44.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.624 01:36:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.624 [2024-07-12 01:36:10.662114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:44.624 [2024-07-12 01:36:10.662188] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.624 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.624 [2024-07-12 01:36:10.760863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.624 [2024-07-12 01:36:10.810660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.624 [2024-07-12 01:36:10.810717] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.624 [2024-07-12 01:36:10.810726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.624 [2024-07-12 01:36:10.810733] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.624 [2024-07-12 01:36:10.810739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.624 [2024-07-12 01:36:10.810868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.624 [2024-07-12 01:36:10.811692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.625 [2024-07-12 01:36:10.811852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.625 [2024-07-12 01:36:10.811853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.195 [2024-07-12 01:36:11.487796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.195 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.195 Malloc0 00:17:45.195 [2024-07-12 01:36:11.548596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.455 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.455 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:45.455 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3930650 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3930650 /var/tmp/bdevperf.sock 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3930650 ']' 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
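At this point the host_management target is running inside the namespace set up above and has a TCP transport, a Malloc0 bdev, and a listener on 10.0.0.2:4420. A hedged sketch of that bring-up follows; scripts/rpc.py again stands in for the rpc_cmd wrapper, and the batch of RPCs the script writes to rpcs.txt is not echoed in this trace, so only the commands actually visible are reproduced.

# target runs in the test namespace with core mask 0x1E (reactors on cores 1-4 above)
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# create the TCP transport with the options requested by the test
"$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
# the subsystem, namespace and listener are then applied from the generated rpcs.txt batch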
00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:45.456 { 00:17:45.456 "params": { 00:17:45.456 "name": "Nvme$subsystem", 00:17:45.456 "trtype": "$TEST_TRANSPORT", 00:17:45.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:45.456 "adrfam": "ipv4", 00:17:45.456 "trsvcid": "$NVMF_PORT", 00:17:45.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:45.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:45.456 "hdgst": ${hdgst:-false}, 00:17:45.456 "ddgst": ${ddgst:-false} 00:17:45.456 }, 00:17:45.456 "method": "bdev_nvme_attach_controller" 00:17:45.456 } 00:17:45.456 EOF 00:17:45.456 )") 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:45.456 01:36:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:45.456 "params": { 00:17:45.456 "name": "Nvme0", 00:17:45.456 "trtype": "tcp", 00:17:45.456 "traddr": "10.0.0.2", 00:17:45.456 "adrfam": "ipv4", 00:17:45.456 "trsvcid": "4420", 00:17:45.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:45.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:45.456 "hdgst": false, 00:17:45.456 "ddgst": false 00:17:45.456 }, 00:17:45.456 "method": "bdev_nvme_attach_controller" 00:17:45.456 }' 00:17:45.456 [2024-07-12 01:36:11.656188] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:45.456 [2024-07-12 01:36:11.656243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930650 ] 00:17:45.456 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.456 [2024-07-12 01:36:11.722303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.456 [2024-07-12 01:36:11.753193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.715 Running I/O for 10 seconds... 
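The bdevperf initiator above is configured entirely by the JSON fragment that gen_nvmf_target_json expands in the trace; reproduced as a standalone heredoc it looks like the sketch below. Only the bdev_nvme_attach_controller entry is visible in the log; wrapping it into the complete --json document fed to bdevperf on /dev/fd/63 is done by the harness.

# the controller entry bdevperf attaches to, exactly as printed above
cat <<'JSON'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
# bdevperf options used above: -q 64 outstanding I/Os, -o 65536-byte I/Os,
# -w verify workload, -t 10 seconds, -r pointing bdevperf at its own RPC socket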
00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=769 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 769 -ge 100 ']' 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 [2024-07-12 01:36:12.491614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a00100 is same with the state(5) to be set 00:17:46.287 [2024-07-12 01:36:12.491688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a00100 is same with the state(5) to be set 00:17:46.287 [2024-07-12 01:36:12.491697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a00100 is same with the state(5) to be 
set 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.287 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 [2024-07-12 01:36:12.499693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.287 [2024-07-12 01:36:12.499728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.287 [2024-07-12 01:36:12.499746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.287 [2024-07-12 01:36:12.499761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.287 [2024-07-12 01:36:12.499776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18425a0 is same with the state(5) to be set 00:17:46.287 [2024-07-12 01:36:12.499870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.287 [2024-07-12 01:36:12.499881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.287 [2024-07-12 01:36:12.499903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.287 [2024-07-12 01:36:12.499919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.287 [2024-07-12 01:36:12.499936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.287 [2024-07-12 01:36:12.499945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.287 [2024-07-12 01:36:12.499957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid 5 through cid 63 (lba 107136 through 114560), timestamps 01:36:12.499966 through 01:36:12.500941; repeated notices condensed ...] 00:17:46.289 [2024-07-12 01:36:12.500989] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair
0x1c535a0 was disconnected and freed. reset controller. 00:17:46.289 [2024-07-12 01:36:12.502162] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.289 task offset: 106496 on job bdev=Nvme0n1 fails 00:17:46.289 00:17:46.289 Latency(us) 00:17:46.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.289 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:46.289 Job: Nvme0n1 ended in about 0.57 seconds with error 00:17:46.289 Verification LBA range: start 0x0 length 0x400 00:17:46.289 Nvme0n1 : 0.57 1468.01 91.75 112.92 0.00 39521.04 1843.20 34734.08 00:17:46.289 =================================================================================================================== 00:17:46.289 Total : 1468.01 91.75 112.92 0.00 39521.04 1843.20 34734.08 00:17:46.289 [2024-07-12 01:36:12.504135] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:46.289 [2024-07-12 01:36:12.504156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18425a0 (9): Bad file descriptor 00:17:46.289 01:36:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.289 01:36:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:46.289 [2024-07-12 01:36:12.556845] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3930650 00:17:47.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3930650) - No such process 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.235 { 00:17:47.235 "params": { 00:17:47.235 "name": "Nvme$subsystem", 00:17:47.235 "trtype": "$TEST_TRANSPORT", 00:17:47.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.235 "adrfam": "ipv4", 00:17:47.235 "trsvcid": "$NVMF_PORT", 00:17:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.235 "hdgst": ${hdgst:-false}, 00:17:47.235 "ddgst": ${ddgst:-false} 00:17:47.235 }, 00:17:47.235 "method": "bdev_nvme_attach_controller" 00:17:47.235 } 00:17:47.235 EOF 00:17:47.235 )") 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
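The heredoc traced just above is how gen_nvmf_target_json assembles one bdev_nvme_attach_controller entry per subsystem index; the filled-in result for subsystem 0 is printed on the following lines. A minimal sketch of re-running this bdevperf step by hand, assuming the same SPDK workspace, a target already serving nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, and the helpers from test/nvmf/common.sh (the variable names are the ones the heredoc substitutes):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2    # NVMF_PORT=4420 is set by test/nvmf/common.sh itself
  source test/nvmf/common.sh                                 # defines gen_nvmf_target_json
  # same parameters as the traced run: queue depth 64, 64 KiB I/O, verify workload, 1 second
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1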
00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:47.235 01:36:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.235 "params": { 00:17:47.235 "name": "Nvme0", 00:17:47.235 "trtype": "tcp", 00:17:47.235 "traddr": "10.0.0.2", 00:17:47.235 "adrfam": "ipv4", 00:17:47.235 "trsvcid": "4420", 00:17:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:47.235 "hdgst": false, 00:17:47.235 "ddgst": false 00:17:47.235 }, 00:17:47.235 "method": "bdev_nvme_attach_controller" 00:17:47.235 }' 00:17:47.235 [2024-07-12 01:36:13.565821] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:47.235 [2024-07-12 01:36:13.565876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931004 ] 00:17:47.496 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.496 [2024-07-12 01:36:13.632781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.496 [2024-07-12 01:36:13.662055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.757 Running I/O for 1 seconds... 00:17:48.700 00:17:48.700 Latency(us) 00:17:48.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:48.700 Verification LBA range: start 0x0 length 0x400 00:17:48.700 Nvme0n1 : 1.04 1481.19 92.57 0.00 0.00 42477.18 8901.97 35826.35 00:17:48.700 =================================================================================================================== 00:17:48.700 Total : 1481.19 92.57 0.00 0.00 42477.18 8901.97 35826.35 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.961 rmmod nvme_tcp 00:17:48.961 rmmod nvme_fabrics 00:17:48.961 rmmod nvme_keyring 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # 
'[' -n 3930418 ']' 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3930418 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3930418 ']' 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3930418 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3930418 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3930418' 00:17:48.961 killing process with pid 3930418 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3930418 00:17:48.961 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3930418 00:17:48.961 [2024-07-12 01:36:15.307294] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.223 01:36:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.135 01:36:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.135 01:36:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:51.136 00:17:51.136 real 0m15.289s 00:17:51.136 user 0m22.819s 00:17:51.136 sys 0m7.268s 00:17:51.136 01:36:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:51.136 01:36:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.136 ************************************ 00:17:51.136 END TEST nvmf_host_management 00:17:51.136 ************************************ 00:17:51.136 01:36:17 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:51.136 01:36:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:51.136 01:36:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:51.136 01:36:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.136 ************************************ 00:17:51.136 START TEST nvmf_lvol 00:17:51.136 ************************************ 00:17:51.136 01:36:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:51.398 * Looking for test storage... 00:17:51.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.398 01:36:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.541 01:36:25 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:59.541 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.541 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:59.542 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:59.542 Found net devices under 0000:31:00.0: cvl_0_0 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
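Both E810 ports (0x8086 - 0x159b, bound to the ice driver) pass the filter here; the PCI-function-to-interface mapping is read straight out of sysfs via the pci_net_devs expansion above, and the matching line for the second port follows below. A minimal stand-alone check of that mapping, assuming the same PCI addresses as this host:

  ls /sys/bus/pci/devices/0000:31:00.0/net/   # expected to print cvl_0_0, matching the 'Found net devices' line above
  ls /sys/bus/pci/devices/0000:31:00.1/net/   # expected to print cvl_0_1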
00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:59.542 Found net devices under 0000:31:00.1: cvl_0_1 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:17:59.542 00:17:59.542 --- 10.0.0.2 ping statistics --- 00:17:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.542 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:17:59.542 00:17:59.542 --- 10.0.0.1 ping statistics --- 00:17:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.542 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3936015 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3936015 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3936015 ']' 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.542 01:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:59.542 [2024-07-12 01:36:25.709927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:59.542 [2024-07-12 01:36:25.709974] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.542 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.542 [2024-07-12 01:36:25.783241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.542 [2024-07-12 01:36:25.814778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.542 [2024-07-12 01:36:25.814815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:59.542 [2024-07-12 01:36:25.814823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.542 [2024-07-12 01:36:25.814829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.542 [2024-07-12 01:36:25.814835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.542 [2024-07-12 01:36:25.814970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.542 [2024-07-12 01:36:25.815084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.542 [2024-07-12 01:36:25.815087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:00.486 [2024-07-12 01:36:26.664646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.486 01:36:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.747 01:36:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:00.747 01:36:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.747 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:00.747 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:01.007 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:01.268 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fdb8a754-0fc2-4d02-8708-b22e19dcabf5 00:18:01.268 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fdb8a754-0fc2-4d02-8708-b22e19dcabf5 lvol 20 00:18:01.268 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dc35e93b-95b0-43b7-8a84-95d028cfd813 00:18:01.268 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:01.528 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc35e93b-95b0-43b7-8a84-95d028cfd813 00:18:01.788 01:36:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
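The listener notice and the snapshot/clone/inflate steps follow below. Condensed, the target-side bring-up traced above amounts to the RPC sequence sketched here, assuming a running nvmf_tgt and scripts/rpc.py on PATH (the traced run uses the full workspace path; lvstore and lvol UUIDs differ per run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                               # Malloc0
  rpc.py bdev_malloc_create 64 512                               # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)               # prints the new lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)              # initial lvol of size 20 (LVOL_BDEV_INIT_SIZE)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420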
00:18:01.788 [2024-07-12 01:36:28.051214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.788 01:36:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:02.049 01:36:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3936587 00:18:02.049 01:36:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:02.049 01:36:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:02.049 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.994 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dc35e93b-95b0-43b7-8a84-95d028cfd813 MY_SNAPSHOT 00:18:03.316 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=450b2f83-5a2e-4e5e-8f1b-6c7a7476edcb 00:18:03.316 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dc35e93b-95b0-43b7-8a84-95d028cfd813 30 00:18:03.316 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 450b2f83-5a2e-4e5e-8f1b-6c7a7476edcb MY_CLONE 00:18:03.604 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e8ad7ccc-40f9-4e1f-a5ff-b51ff482fa1d 00:18:03.605 01:36:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e8ad7ccc-40f9-4e1f-a5ff-b51ff482fa1d 00:18:04.176 01:36:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3936587 00:18:12.309 Initializing NVMe Controllers 00:18:12.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:12.309 Controller IO queue size 128, less than required. 00:18:12.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:12.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:12.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:12.309 Initialization complete. Launching workers. 
00:18:12.309 ======================================================== 00:18:12.309 Latency(us) 00:18:12.309 Device Information : IOPS MiB/s Average min max 00:18:12.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12282.00 47.98 10424.22 1446.51 58481.58 00:18:12.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17707.70 69.17 7227.73 611.03 34281.34 00:18:12.309 ======================================================== 00:18:12.309 Total : 29989.70 117.15 8536.82 611.03 58481.58 00:18:12.309 00:18:12.309 01:36:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:12.570 01:36:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc35e93b-95b0-43b7-8a84-95d028cfd813 00:18:12.570 01:36:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fdb8a754-0fc2-4d02-8708-b22e19dcabf5 00:18:12.832 01:36:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.832 rmmod nvme_tcp 00:18:12.832 rmmod nvme_fabrics 00:18:12.832 rmmod nvme_keyring 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3936015 ']' 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3936015 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3936015 ']' 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3936015 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3936015 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3936015' 00:18:12.832 killing process with pid 3936015 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3936015 00:18:12.832 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3936015 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.092 
01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.092 01:36:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.007 01:36:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.007 00:18:15.007 real 0m23.849s 00:18:15.007 user 1m3.525s 00:18:15.007 sys 0m8.322s 00:18:15.007 01:36:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:15.007 01:36:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:15.007 ************************************ 00:18:15.007 END TEST nvmf_lvol 00:18:15.007 ************************************ 00:18:15.269 01:36:41 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:15.269 01:36:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:15.269 01:36:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:15.269 01:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.269 ************************************ 00:18:15.269 START TEST nvmf_lvs_grow 00:18:15.269 ************************************ 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:15.269 * Looking for test storage... 
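Each of these suites runs under the harness's run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing block just above. The actual helper lives in common/autotest_common.sh; purely as an illustrative analogue (not the real implementation), the wrapping behaves roughly like this:

  # simplified stand-in for run_test; banner format mimics the log above
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # produces the real/user/sys summary
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

  run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp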
00:18:15.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.269 01:36:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:23.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:23.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:23.416 Found net devices under 0000:31:00.0: cvl_0_0 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.416 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:23.417 Found net devices under 0000:31:00.1: cvl_0_1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:18:23.417 00:18:23.417 --- 10.0.0.2 ping statistics --- 00:18:23.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.417 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:18:23.417 00:18:23.417 --- 10.0.0.1 ping statistics --- 00:18:23.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.417 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3943402 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3943402 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3943402 ']' 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.417 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:23.417 [2024-07-12 01:36:49.723649] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:23.417 [2024-07-12 01:36:49.723698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.417 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.678 [2024-07-12 01:36:49.799874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.678 [2024-07-12 01:36:49.829857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.678 [2024-07-12 01:36:49.829898] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
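The two successful pings confirm the point-to-point topology that nvmftestinit set up a few lines earlier: one port of the E810 pair (cvl_0_0) is moved into a private network namespace as the target side, while the other (cvl_0_1) stays in the root namespace as the initiator side. Reconstructed from the commands in this log (interface names and 10.0.0.x addresses are specific to this rig, the nvmf_tgt path is shortened, and the trailing ampersand is an assumption about backgrounding), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # permit NVMe/TCP on port 4420
  modprobe nvme-tcp
  # the target itself is then launched inside the namespace and waited on
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &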
00:18:23.678 [2024-07-12 01:36:49.829906] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.678 [2024-07-12 01:36:49.829913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.678 [2024-07-12 01:36:49.829919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.678 [2024-07-12 01:36:49.829937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.678 01:36:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:23.941 [2024-07-12 01:36:50.092896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:23.941 ************************************ 00:18:23.941 START TEST lvs_grow_clean 00:18:23.941 ************************************ 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:23.941 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:24.202 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:18:24.202 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:24.202 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:24.202 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:24.202 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:24.462 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:24.462 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:24.462 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 008cbc72-8071-4ef2-8a42-346adaf4298a lvol 150 00:18:24.723 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=acef51e6-f831-4213-aaa0-732b28f7e2bf 00:18:24.723 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:24.723 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:24.723 [2024-07-12 01:36:50.971805] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:24.723 [2024-07-12 01:36:50.971858] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:24.723 true 00:18:24.723 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:24.723 01:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:24.983 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:24.983 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:24.983 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 acef51e6-f831-4213-aaa0-732b28f7e2bf 00:18:25.242 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:25.242 [2024-07-12 01:36:51.561601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.242 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3943785 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3943785 /var/tmp/bdevperf.sock 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3943785 ']' 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.501 01:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:25.501 [2024-07-12 01:36:51.780749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
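The lvs_grow_clean case being wired up here exercises online growth of a logical volume store backed by an AIO bdev: a 200 MiB backing file yields 49 usable four-MiB data clusters, the file is later truncated to 400 MiB, the AIO bdev rescanned, and the lvstore grown, after which the test expects 99 clusters. Stripped of the harness and the concurrent bdevperf I/O, the core sequence (paths shortened; RPC, AIO and lvs are placeholder variables for the values shown in this run) is approximately:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=./test/nvmf/target/aio_bdev                   # backing file for the AIO bdev
  truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096         # 4 KiB block size
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_create -u "$lvs" lvol 150          # 150 MiB volume inside the store
  truncate -s 400M "$AIO"                           # grow the backing file
  $RPC bdev_aio_rescan aio_bdev                     # let the AIO bdev pick up the new size
  $RPC bdev_lvol_grow_lvstore -u "$lvs"             # extend the lvstore into the new space
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after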
00:18:25.501 [2024-07-12 01:36:51.780799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943785 ] 00:18:25.501 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.761 [2024-07-12 01:36:51.862639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.761 [2024-07-12 01:36:51.893659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.331 01:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.331 01:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:18:26.331 01:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:26.592 Nvme0n1 00:18:26.592 01:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:26.854 [ 00:18:26.854 { 00:18:26.854 "name": "Nvme0n1", 00:18:26.854 "aliases": [ 00:18:26.854 "acef51e6-f831-4213-aaa0-732b28f7e2bf" 00:18:26.854 ], 00:18:26.854 "product_name": "NVMe disk", 00:18:26.854 "block_size": 4096, 00:18:26.854 "num_blocks": 38912, 00:18:26.854 "uuid": "acef51e6-f831-4213-aaa0-732b28f7e2bf", 00:18:26.854 "assigned_rate_limits": { 00:18:26.854 "rw_ios_per_sec": 0, 00:18:26.854 "rw_mbytes_per_sec": 0, 00:18:26.854 "r_mbytes_per_sec": 0, 00:18:26.854 "w_mbytes_per_sec": 0 00:18:26.854 }, 00:18:26.854 "claimed": false, 00:18:26.854 "zoned": false, 00:18:26.854 "supported_io_types": { 00:18:26.854 "read": true, 00:18:26.854 "write": true, 00:18:26.854 "unmap": true, 00:18:26.854 "write_zeroes": true, 00:18:26.854 "flush": true, 00:18:26.854 "reset": true, 00:18:26.854 "compare": true, 00:18:26.854 "compare_and_write": true, 00:18:26.854 "abort": true, 00:18:26.854 "nvme_admin": true, 00:18:26.854 "nvme_io": true 00:18:26.854 }, 00:18:26.854 "memory_domains": [ 00:18:26.854 { 00:18:26.854 "dma_device_id": "system", 00:18:26.854 "dma_device_type": 1 00:18:26.854 } 00:18:26.854 ], 00:18:26.854 "driver_specific": { 00:18:26.854 "nvme": [ 00:18:26.854 { 00:18:26.854 "trid": { 00:18:26.854 "trtype": "TCP", 00:18:26.854 "adrfam": "IPv4", 00:18:26.854 "traddr": "10.0.0.2", 00:18:26.854 "trsvcid": "4420", 00:18:26.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:26.854 }, 00:18:26.854 "ctrlr_data": { 00:18:26.854 "cntlid": 1, 00:18:26.854 "vendor_id": "0x8086", 00:18:26.854 "model_number": "SPDK bdev Controller", 00:18:26.854 "serial_number": "SPDK0", 00:18:26.854 "firmware_revision": "24.05.1", 00:18:26.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:26.854 "oacs": { 00:18:26.854 "security": 0, 00:18:26.854 "format": 0, 00:18:26.854 "firmware": 0, 00:18:26.854 "ns_manage": 0 00:18:26.854 }, 00:18:26.854 "multi_ctrlr": true, 00:18:26.854 "ana_reporting": false 00:18:26.854 }, 00:18:26.854 "vs": { 00:18:26.854 "nvme_version": "1.3" 00:18:26.854 }, 00:18:26.854 "ns_data": { 00:18:26.854 "id": 1, 00:18:26.854 "can_share": true 00:18:26.854 } 00:18:26.854 } 00:18:26.854 ], 00:18:26.854 "mp_policy": "active_passive" 00:18:26.854 } 00:18:26.854 } 00:18:26.854 ] 00:18:26.854 01:36:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3944115 00:18:26.854 01:36:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:26.854 01:36:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.854 Running I/O for 10 seconds... 00:18:28.236 Latency(us) 00:18:28.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.236 Nvme0n1 : 1.00 17918.00 69.99 0.00 0.00 0.00 0.00 0.00 00:18:28.236 =================================================================================================================== 00:18:28.236 Total : 17918.00 69.99 0.00 0.00 0.00 0.00 0.00 00:18:28.236 00:18:28.806 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:28.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.806 Nvme0n1 : 2.00 18076.00 70.61 0.00 0.00 0.00 0.00 0.00 00:18:28.806 =================================================================================================================== 00:18:28.806 Total : 18076.00 70.61 0.00 0.00 0.00 0.00 0.00 00:18:28.806 00:18:29.066 true 00:18:29.066 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:29.066 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:29.326 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:29.326 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:29.326 01:36:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3944115 00:18:29.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.895 Nvme0n1 : 3.00 18148.67 70.89 0.00 0.00 0.00 0.00 0.00 00:18:29.895 =================================================================================================================== 00:18:29.895 Total : 18148.67 70.89 0.00 0.00 0.00 0.00 0.00 00:18:29.895 00:18:30.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.835 Nvme0n1 : 4.00 18185.75 71.04 0.00 0.00 0.00 0.00 0.00 00:18:30.835 =================================================================================================================== 00:18:30.835 Total : 18185.75 71.04 0.00 0.00 0.00 0.00 0.00 00:18:30.835 00:18:32.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.212 Nvme0n1 : 5.00 18220.40 71.17 0.00 0.00 0.00 0.00 0.00 00:18:32.212 =================================================================================================================== 00:18:32.212 Total : 18220.40 71.17 0.00 0.00 0.00 0.00 0.00 00:18:32.212 00:18:33.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.179 Nvme0n1 : 6.00 18243.67 71.26 0.00 0.00 0.00 0.00 0.00 00:18:33.179 
=================================================================================================================== 00:18:33.179 Total : 18243.67 71.26 0.00 0.00 0.00 0.00 0.00 00:18:33.179 00:18:34.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.120 Nvme0n1 : 7.00 18260.00 71.33 0.00 0.00 0.00 0.00 0.00 00:18:34.120 =================================================================================================================== 00:18:34.120 Total : 18260.00 71.33 0.00 0.00 0.00 0.00 0.00 00:18:34.120 00:18:35.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.061 Nvme0n1 : 8.00 18271.75 71.37 0.00 0.00 0.00 0.00 0.00 00:18:35.061 =================================================================================================================== 00:18:35.061 Total : 18271.75 71.37 0.00 0.00 0.00 0.00 0.00 00:18:35.061 00:18:36.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.005 Nvme0n1 : 9.00 18283.44 71.42 0.00 0.00 0.00 0.00 0.00 00:18:36.005 =================================================================================================================== 00:18:36.005 Total : 18283.44 71.42 0.00 0.00 0.00 0.00 0.00 00:18:36.005 00:18:36.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.946 Nvme0n1 : 10.00 18295.50 71.47 0.00 0.00 0.00 0.00 0.00 00:18:36.946 =================================================================================================================== 00:18:36.946 Total : 18295.50 71.47 0.00 0.00 0.00 0.00 0.00 00:18:36.946 00:18:36.946 00:18:36.946 Latency(us) 00:18:36.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.946 Nvme0n1 : 10.01 18298.20 71.48 0.00 0.00 6992.43 4314.45 17148.59 00:18:36.946 =================================================================================================================== 00:18:36.946 Total : 18298.20 71.48 0.00 0.00 6992.43 4314.45 17148.59 00:18:36.946 0 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3943785 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3943785 ']' 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3943785 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943785 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943785' 00:18:36.946 killing process with pid 3943785 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3943785 00:18:36.946 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.946 00:18:36.946 Latency(us) 00:18:36.946 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:36.946 =================================================================================================================== 00:18:36.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.946 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3943785 00:18:37.205 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:37.205 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:37.464 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:37.464 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:37.724 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:37.724 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:37.724 01:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:37.724 [2024-07-12 01:37:03.969644] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:37.724 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:37.984 request: 00:18:37.984 { 00:18:37.984 "uuid": "008cbc72-8071-4ef2-8a42-346adaf4298a", 00:18:37.984 "method": "bdev_lvol_get_lvstores", 00:18:37.984 "req_id": 1 00:18:37.984 } 00:18:37.984 Got JSON-RPC error response 00:18:37.984 response: 00:18:37.984 { 00:18:37.984 "code": -19, 00:18:37.984 "message": "No such device" 00:18:37.984 } 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:37.984 aio_bdev 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev acef51e6-f831-4213-aaa0-732b28f7e2bf 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=acef51e6-f831-4213-aaa0-732b28f7e2bf 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:37.984 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:38.244 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b acef51e6-f831-4213-aaa0-732b28f7e2bf -t 2000 00:18:38.505 [ 00:18:38.505 { 00:18:38.505 "name": "acef51e6-f831-4213-aaa0-732b28f7e2bf", 00:18:38.505 "aliases": [ 00:18:38.505 "lvs/lvol" 00:18:38.505 ], 00:18:38.505 "product_name": "Logical Volume", 00:18:38.505 "block_size": 4096, 00:18:38.505 "num_blocks": 38912, 00:18:38.505 "uuid": "acef51e6-f831-4213-aaa0-732b28f7e2bf", 00:18:38.505 "assigned_rate_limits": { 00:18:38.505 "rw_ios_per_sec": 0, 00:18:38.505 "rw_mbytes_per_sec": 0, 00:18:38.505 "r_mbytes_per_sec": 0, 00:18:38.505 "w_mbytes_per_sec": 0 00:18:38.505 }, 00:18:38.505 "claimed": false, 00:18:38.505 "zoned": false, 00:18:38.505 "supported_io_types": { 00:18:38.505 "read": true, 00:18:38.505 "write": true, 00:18:38.505 "unmap": true, 00:18:38.505 "write_zeroes": true, 00:18:38.505 "flush": false, 00:18:38.505 "reset": true, 00:18:38.505 "compare": false, 00:18:38.505 "compare_and_write": false, 00:18:38.505 "abort": false, 00:18:38.505 "nvme_admin": false, 00:18:38.505 "nvme_io": false 00:18:38.505 }, 00:18:38.506 "driver_specific": { 00:18:38.506 "lvol": { 00:18:38.506 "lvol_store_uuid": "008cbc72-8071-4ef2-8a42-346adaf4298a", 00:18:38.506 "base_bdev": "aio_bdev", 
00:18:38.506 "thin_provision": false, 00:18:38.506 "num_allocated_clusters": 38, 00:18:38.506 "snapshot": false, 00:18:38.506 "clone": false, 00:18:38.506 "esnap_clone": false 00:18:38.506 } 00:18:38.506 } 00:18:38.506 } 00:18:38.506 ] 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:38.506 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:38.765 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:38.766 01:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete acef51e6-f831-4213-aaa0-732b28f7e2bf 00:18:38.766 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 008cbc72-8071-4ef2-8a42-346adaf4298a 00:18:39.026 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.286 00:18:39.286 real 0m15.255s 00:18:39.286 user 0m14.954s 00:18:39.286 sys 0m1.310s 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:39.286 ************************************ 00:18:39.286 END TEST lvs_grow_clean 00:18:39.286 ************************************ 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:39.286 ************************************ 00:18:39.286 START TEST lvs_grow_dirty 00:18:39.286 ************************************ 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:18:39.286 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.287 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:39.548 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:39.548 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:39.548 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cc0973cf-02d3-424c-9975-a237ed69a510 00:18:39.548 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:39.548 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:39.808 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:39.808 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:39.808 01:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc0973cf-02d3-424c-9975-a237ed69a510 lvol 150 00:18:39.808 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:39.808 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.808 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:40.069 [2024-07-12 01:37:06.272558] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:40.069 [2024-07-12 01:37:06.272613] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:40.069 true 00:18:40.069 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:40.069 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:18:40.330 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:40.330 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:40.330 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:40.591 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:40.591 [2024-07-12 01:37:06.890436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.591 01:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3946857 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3946857 /var/tmp/bdevperf.sock 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3946857 ']' 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.851 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:40.851 [2024-07-12 01:37:07.089841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:40.851 [2024-07-12 01:37:07.089892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946857 ] 00:18:40.851 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.851 [2024-07-12 01:37:07.170443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.851 [2024-07-12 01:37:07.199048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.792 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.792 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:41.792 01:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:41.792 Nvme0n1 00:18:42.051 01:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:42.051 [ 00:18:42.051 { 00:18:42.051 "name": "Nvme0n1", 00:18:42.051 "aliases": [ 00:18:42.051 "3eec7b2c-b129-4736-9c33-3895f0a7335b" 00:18:42.051 ], 00:18:42.051 "product_name": "NVMe disk", 00:18:42.051 "block_size": 4096, 00:18:42.051 "num_blocks": 38912, 00:18:42.051 "uuid": "3eec7b2c-b129-4736-9c33-3895f0a7335b", 00:18:42.051 "assigned_rate_limits": { 00:18:42.051 "rw_ios_per_sec": 0, 00:18:42.051 "rw_mbytes_per_sec": 0, 00:18:42.051 "r_mbytes_per_sec": 0, 00:18:42.051 "w_mbytes_per_sec": 0 00:18:42.051 }, 00:18:42.051 "claimed": false, 00:18:42.051 "zoned": false, 00:18:42.051 "supported_io_types": { 00:18:42.051 "read": true, 00:18:42.051 "write": true, 00:18:42.051 "unmap": true, 00:18:42.051 "write_zeroes": true, 00:18:42.051 "flush": true, 00:18:42.051 "reset": true, 00:18:42.051 "compare": true, 00:18:42.051 "compare_and_write": true, 00:18:42.051 "abort": true, 00:18:42.051 "nvme_admin": true, 00:18:42.051 "nvme_io": true 00:18:42.051 }, 00:18:42.051 "memory_domains": [ 00:18:42.051 { 00:18:42.051 "dma_device_id": "system", 00:18:42.051 "dma_device_type": 1 00:18:42.051 } 00:18:42.051 ], 00:18:42.051 "driver_specific": { 00:18:42.051 "nvme": [ 00:18:42.051 { 00:18:42.051 "trid": { 00:18:42.051 "trtype": "TCP", 00:18:42.051 "adrfam": "IPv4", 00:18:42.051 "traddr": "10.0.0.2", 00:18:42.051 "trsvcid": "4420", 00:18:42.051 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:42.051 }, 00:18:42.051 "ctrlr_data": { 00:18:42.051 "cntlid": 1, 00:18:42.051 "vendor_id": "0x8086", 00:18:42.051 "model_number": "SPDK bdev Controller", 00:18:42.051 "serial_number": "SPDK0", 00:18:42.051 "firmware_revision": "24.05.1", 00:18:42.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:42.051 "oacs": { 00:18:42.051 "security": 0, 00:18:42.051 "format": 0, 00:18:42.051 "firmware": 0, 00:18:42.051 "ns_manage": 0 00:18:42.051 }, 00:18:42.051 "multi_ctrlr": true, 00:18:42.051 "ana_reporting": false 00:18:42.051 }, 00:18:42.051 "vs": { 00:18:42.051 "nvme_version": "1.3" 00:18:42.051 }, 00:18:42.051 "ns_data": { 00:18:42.051 "id": 1, 00:18:42.051 "can_share": true 00:18:42.051 } 00:18:42.051 } 00:18:42.051 ], 00:18:42.051 "mp_policy": "active_passive" 00:18:42.051 } 00:18:42.051 } 00:18:42.051 ] 00:18:42.051 01:37:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3947099 00:18:42.051 01:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:42.051 01:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.051 Running I/O for 10 seconds... 00:18:43.434 Latency(us) 00:18:43.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.434 Nvme0n1 : 1.00 17982.00 70.24 0.00 0.00 0.00 0.00 0.00 00:18:43.434 =================================================================================================================== 00:18:43.434 Total : 17982.00 70.24 0.00 0.00 0.00 0.00 0.00 00:18:43.434 00:18:44.006 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:44.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.266 Nvme0n1 : 2.00 18071.50 70.59 0.00 0.00 0.00 0.00 0.00 00:18:44.266 =================================================================================================================== 00:18:44.266 Total : 18071.50 70.59 0.00 0.00 0.00 0.00 0.00 00:18:44.266 00:18:44.266 true 00:18:44.266 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:44.266 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:44.527 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:44.527 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:44.527 01:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3947099 00:18:45.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.115 Nvme0n1 : 3.00 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:18:45.115 =================================================================================================================== 00:18:45.115 Total : 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:18:45.115 00:18:46.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.059 Nvme0n1 : 4.00 18186.50 71.04 0.00 0.00 0.00 0.00 0.00 00:18:46.059 =================================================================================================================== 00:18:46.059 Total : 18186.50 71.04 0.00 0.00 0.00 0.00 0.00 00:18:46.059 00:18:47.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.444 Nvme0n1 : 5.00 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:18:47.444 =================================================================================================================== 00:18:47.444 Total : 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:18:47.444 00:18:48.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.387 Nvme0n1 : 6.00 18235.33 71.23 0.00 0.00 0.00 0.00 0.00 00:18:48.387 
=================================================================================================================== 00:18:48.387 Total : 18235.33 71.23 0.00 0.00 0.00 0.00 0.00 00:18:48.387 00:18:49.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.327 Nvme0n1 : 7.00 18252.43 71.30 0.00 0.00 0.00 0.00 0.00 00:18:49.327 =================================================================================================================== 00:18:49.327 Total : 18252.43 71.30 0.00 0.00 0.00 0.00 0.00 00:18:49.327 00:18:50.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.267 Nvme0n1 : 8.00 18271.88 71.37 0.00 0.00 0.00 0.00 0.00 00:18:50.267 =================================================================================================================== 00:18:50.267 Total : 18271.88 71.37 0.00 0.00 0.00 0.00 0.00 00:18:50.267 00:18:51.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.207 Nvme0n1 : 9.00 18281.67 71.41 0.00 0.00 0.00 0.00 0.00 00:18:51.207 =================================================================================================================== 00:18:51.207 Total : 18281.67 71.41 0.00 0.00 0.00 0.00 0.00 00:18:51.207 00:18:52.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.150 Nvme0n1 : 10.00 18295.90 71.47 0.00 0.00 0.00 0.00 0.00 00:18:52.150 =================================================================================================================== 00:18:52.150 Total : 18295.90 71.47 0.00 0.00 0.00 0.00 0.00 00:18:52.150 00:18:52.150 00:18:52.150 Latency(us) 00:18:52.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.150 Nvme0n1 : 10.01 18296.89 71.47 0.00 0.00 6993.35 4287.15 16165.55 00:18:52.150 =================================================================================================================== 00:18:52.150 Total : 18296.89 71.47 0.00 0.00 6993.35 4287.15 16165.55 00:18:52.150 0 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3946857 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3946857 ']' 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3946857 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3946857 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3946857' 00:18:52.150 killing process with pid 3946857 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3946857 00:18:52.150 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.150 00:18:52.150 Latency(us) 00:18:52.150 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:52.150 =================================================================================================================== 00:18:52.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.150 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3946857 00:18:52.411 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:52.411 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:52.672 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:52.672 01:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3943402 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3943402 00:18:52.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3943402 Killed "${NVMF_APP[@]}" "$@" 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3949219 00:18:52.932 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3949219 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3949219 ']' 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
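For reference, the grow-and-verify sequence exercised above reduces to the short shell sketch below. It is a minimal illustration, not part of the captured log: the lvstore UUID is taken as an argument, the rpc.py path matches this workspace, and only subcommands that appear verbatim in the log (bdev_lvol_grow_lvstore, bdev_lvol_get_lvstores) are used. The expected counts come from the run itself: after the backing file is truncated to 400M and rescanned, total_data_clusters grows from 49 to 99, and the 150M lvol holds 38 of them, leaving 61 free.

    #!/usr/bin/env bash
    # Illustrative sketch of the lvs_grow flow asserted above (assumes the aio_bdev
    # backing file has already been truncated to 400M and rescanned).
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs_uuid="$1"   # UUID returned by bdev_lvol_create_lvstore

    # Grow the lvstore to cover the enlarged AIO bdev.
    "$rpc" bdev_lvol_grow_lvstore -u "$lvs_uuid"

    # Re-read the lvstore and check the counts the test asserts: 99 total, 61 free.
    total=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    free=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    (( total == 99 )) || { echo "unexpected total_data_clusters: $total" >&2; exit 1; }
    (( free == 61 ))  || { echo "unexpected free_clusters: $free" >&2; exit 1; }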
00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:52.933 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:52.933 [2024-07-12 01:37:19.168693] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:52.933 [2024-07-12 01:37:19.168749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.933 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.933 [2024-07-12 01:37:19.242952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.933 [2024-07-12 01:37:19.276003] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.933 [2024-07-12 01:37:19.276045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.933 [2024-07-12 01:37:19.276052] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.933 [2024-07-12 01:37:19.276059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.933 [2024-07-12 01:37:19.276064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.933 [2024-07-12 01:37:19.276089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.873 01:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:53.873 [2024-07-12 01:37:20.102824] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:53.873 [2024-07-12 01:37:20.102926] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:53.873 [2024-07-12 01:37:20.102957] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:53.873 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:54.134 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3eec7b2c-b129-4736-9c33-3895f0a7335b -t 2000 00:18:54.134 [ 00:18:54.134 { 00:18:54.134 "name": "3eec7b2c-b129-4736-9c33-3895f0a7335b", 00:18:54.134 "aliases": [ 00:18:54.134 "lvs/lvol" 00:18:54.134 ], 00:18:54.134 "product_name": "Logical Volume", 00:18:54.134 "block_size": 4096, 00:18:54.134 "num_blocks": 38912, 00:18:54.134 "uuid": "3eec7b2c-b129-4736-9c33-3895f0a7335b", 00:18:54.134 "assigned_rate_limits": { 00:18:54.134 "rw_ios_per_sec": 0, 00:18:54.134 "rw_mbytes_per_sec": 0, 00:18:54.134 "r_mbytes_per_sec": 0, 00:18:54.134 "w_mbytes_per_sec": 0 00:18:54.134 }, 00:18:54.134 "claimed": false, 00:18:54.134 "zoned": false, 00:18:54.134 "supported_io_types": { 00:18:54.134 "read": true, 00:18:54.134 "write": true, 00:18:54.134 "unmap": true, 00:18:54.134 "write_zeroes": true, 00:18:54.134 "flush": false, 00:18:54.134 "reset": true, 00:18:54.134 "compare": false, 00:18:54.134 "compare_and_write": false, 00:18:54.134 "abort": false, 00:18:54.134 "nvme_admin": false, 00:18:54.134 "nvme_io": false 00:18:54.134 }, 00:18:54.134 "driver_specific": { 00:18:54.134 "lvol": { 00:18:54.134 "lvol_store_uuid": "cc0973cf-02d3-424c-9975-a237ed69a510", 00:18:54.134 "base_bdev": "aio_bdev", 00:18:54.134 "thin_provision": false, 00:18:54.134 "num_allocated_clusters": 38, 00:18:54.134 "snapshot": false, 00:18:54.134 "clone": false, 00:18:54.134 "esnap_clone": false 00:18:54.134 } 00:18:54.134 } 00:18:54.134 } 00:18:54.134 ] 00:18:54.134 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:54.134 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:54.134 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:54.394 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:54.394 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:54.394 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:54.655 [2024-07-12 01:37:20.918831] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cc0973cf-02d3-424c-9975-a237ed69a510 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:54.655 01:37:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:54.915 request: 00:18:54.915 { 00:18:54.915 "uuid": "cc0973cf-02d3-424c-9975-a237ed69a510", 00:18:54.915 "method": "bdev_lvol_get_lvstores", 00:18:54.915 "req_id": 1 00:18:54.915 } 00:18:54.915 Got JSON-RPC error response 00:18:54.915 response: 00:18:54.915 { 00:18:54.915 "code": -19, 00:18:54.915 "message": "No such device" 00:18:54.915 } 00:18:54.915 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:54.915 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.915 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.915 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.915 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:54.915 aio_bdev 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
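The NOT-wrapped call above is a negative check: once aio_bdev has been deleted out from under the lvstore, bdev_lvol_get_lvstores must fail with JSON-RPC error -19 ("No such device"). A minimal sketch of the same assertion, using the UUID from this run and this workspace's rpc.py path, purely as an illustration:

    # Expect the lvstore lookup to fail once its base bdev is gone.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs_uuid=cc0973cf-02d3-424c-9975-a237ed69a510

    if "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" >/dev/null 2>&1; then
        echo "lvstore is still resolvable after aio_bdev removal" >&2
        exit 1
    fi
    echo "bdev_lvol_get_lvstores failed as expected (orphaned lvstore)"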
00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:55.175 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3eec7b2c-b129-4736-9c33-3895f0a7335b -t 2000 00:18:55.435 [ 00:18:55.435 { 00:18:55.435 "name": "3eec7b2c-b129-4736-9c33-3895f0a7335b", 00:18:55.435 "aliases": [ 00:18:55.435 "lvs/lvol" 00:18:55.435 ], 00:18:55.435 "product_name": "Logical Volume", 00:18:55.435 "block_size": 4096, 00:18:55.435 "num_blocks": 38912, 00:18:55.435 "uuid": "3eec7b2c-b129-4736-9c33-3895f0a7335b", 00:18:55.435 "assigned_rate_limits": { 00:18:55.435 "rw_ios_per_sec": 0, 00:18:55.435 "rw_mbytes_per_sec": 0, 00:18:55.435 "r_mbytes_per_sec": 0, 00:18:55.435 "w_mbytes_per_sec": 0 00:18:55.435 }, 00:18:55.435 "claimed": false, 00:18:55.435 "zoned": false, 00:18:55.435 "supported_io_types": { 00:18:55.435 "read": true, 00:18:55.435 "write": true, 00:18:55.435 "unmap": true, 00:18:55.435 "write_zeroes": true, 00:18:55.435 "flush": false, 00:18:55.435 "reset": true, 00:18:55.435 "compare": false, 00:18:55.435 "compare_and_write": false, 00:18:55.435 "abort": false, 00:18:55.435 "nvme_admin": false, 00:18:55.435 "nvme_io": false 00:18:55.435 }, 00:18:55.435 "driver_specific": { 00:18:55.435 "lvol": { 00:18:55.435 "lvol_store_uuid": "cc0973cf-02d3-424c-9975-a237ed69a510", 00:18:55.435 "base_bdev": "aio_bdev", 00:18:55.435 "thin_provision": false, 00:18:55.435 "num_allocated_clusters": 38, 00:18:55.435 "snapshot": false, 00:18:55.435 "clone": false, 00:18:55.435 "esnap_clone": false 00:18:55.435 } 00:18:55.435 } 00:18:55.435 } 00:18:55.435 ] 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:55.435 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:55.695 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:55.695 01:37:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3eec7b2c-b129-4736-9c33-3895f0a7335b 00:18:55.695 01:37:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc0973cf-02d3-424c-9975-a237ed69a510 00:18:55.955 01:37:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:56.216 00:18:56.216 real 0m16.882s 00:18:56.216 user 0m43.669s 00:18:56.216 sys 0m2.886s 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:56.216 ************************************ 00:18:56.216 END TEST lvs_grow_dirty 00:18:56.216 ************************************ 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:56.216 nvmf_trace.0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.216 rmmod nvme_tcp 00:18:56.216 rmmod nvme_fabrics 00:18:56.216 rmmod nvme_keyring 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3949219 ']' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3949219 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3949219 ']' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3949219 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:56.216 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3949219 00:18:56.477 01:37:22 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3949219' 00:18:56.477 killing process with pid 3949219 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3949219 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3949219 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.477 01:37:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.020 01:37:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:59.020 00:18:59.020 real 0m43.372s 00:18:59.020 user 1m4.708s 00:18:59.020 sys 0m10.643s 00:18:59.020 01:37:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.020 01:37:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:59.020 ************************************ 00:18:59.020 END TEST nvmf_lvs_grow 00:18:59.020 ************************************ 00:18:59.020 01:37:24 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:59.020 01:37:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:59.020 01:37:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:59.020 01:37:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.020 ************************************ 00:18:59.020 START TEST nvmf_bdev_io_wait 00:18:59.020 ************************************ 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:59.020 * Looking for test storage... 
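The teardown in the lvs_grow section above also archives the nvmf trace left in shared memory before unloading the host-side modules; the sketch below condenses that step. The archive destination is a placeholder (the real run writes into the workspace output directory), and only commands that appear verbatim in the log are used; the rmmod output above shows nvme_fabrics and nvme_keyring being removed along with nvme-tcp.

    # Capture the SPDK nvmf trace from /dev/shm for offline analysis, then unload the
    # NVMe/TCP host modules (illustrative sketch; requires root).
    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n' | head -n 1)   # e.g. nvmf_trace.0
    if [ -n "$shm_file" ]; then
        tar -C /dev/shm/ -cvzf "./${shm_file}_shm.tar.gz" "$shm_file"
    fi
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics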
00:18:59.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.020 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.021 01:37:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.021 01:37:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:59.021 01:37:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:59.021 01:37:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.021 01:37:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:07.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:07.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:07.163 Found net devices under 0000:31:00.0: cvl_0_0 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:07.163 Found net devices under 0000:31:00.1: cvl_0_1 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.163 01:37:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:07.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:19:07.163 00:19:07.163 --- 10.0.0.2 ping statistics --- 00:19:07.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.163 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:19:07.163 00:19:07.163 --- 10.0.0.1 ping statistics --- 00:19:07.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.163 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3954642 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3954642 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3954642 ']' 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.163 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:07.164 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.164 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:07.164 01:37:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.164 [2024-07-12 01:37:33.410549] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:07.164 [2024-07-12 01:37:33.410616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.164 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.164 [2024-07-12 01:37:33.490439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.423 [2024-07-12 01:37:33.530971] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.423 [2024-07-12 01:37:33.531015] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.424 [2024-07-12 01:37:33.531024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.424 [2024-07-12 01:37:33.531031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.424 [2024-07-12 01:37:33.531037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.424 [2024-07-12 01:37:33.531179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.424 [2024-07-12 01:37:33.531327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.424 [2024-07-12 01:37:33.531388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.424 [2024-07-12 01:37:33.531388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 [2024-07-12 01:37:34.291486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.994 01:37:34 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 Malloc0 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.994 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:08.256 [2024-07-12 01:37:34.365565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3954790 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3954793 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:08.256 { 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme$subsystem", 00:19:08.256 "trtype": "$TEST_TRANSPORT", 00:19:08.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "$NVMF_PORT", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.256 "hdgst": ${hdgst:-false}, 00:19:08.256 "ddgst": ${ddgst:-false} 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 } 00:19:08.256 EOF 00:19:08.256 )") 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3954796 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3954799 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:08.256 { 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme$subsystem", 00:19:08.256 "trtype": "$TEST_TRANSPORT", 00:19:08.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "$NVMF_PORT", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.256 "hdgst": ${hdgst:-false}, 00:19:08.256 "ddgst": ${ddgst:-false} 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 } 00:19:08.256 EOF 00:19:08.256 )") 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:08.256 { 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme$subsystem", 00:19:08.256 "trtype": "$TEST_TRANSPORT", 00:19:08.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "$NVMF_PORT", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.256 "hdgst": ${hdgst:-false}, 00:19:08.256 "ddgst": ${ddgst:-false} 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 } 00:19:08.256 EOF 00:19:08.256 )") 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:08.256 { 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme$subsystem", 00:19:08.256 "trtype": "$TEST_TRANSPORT", 00:19:08.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "$NVMF_PORT", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.256 "hdgst": ${hdgst:-false}, 00:19:08.256 "ddgst": ${ddgst:-false} 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 } 00:19:08.256 EOF 00:19:08.256 )") 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3954790 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme1", 00:19:08.256 "trtype": "tcp", 00:19:08.256 "traddr": "10.0.0.2", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "4420", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.256 "hdgst": false, 00:19:08.256 "ddgst": false 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 }' 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:08.256 "params": { 00:19:08.256 "name": "Nvme1", 00:19:08.256 "trtype": "tcp", 00:19:08.256 "traddr": "10.0.0.2", 00:19:08.256 "adrfam": "ipv4", 00:19:08.256 "trsvcid": "4420", 00:19:08.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.256 "hdgst": false, 00:19:08.256 "ddgst": false 00:19:08.256 }, 00:19:08.256 "method": "bdev_nvme_attach_controller" 00:19:08.256 }' 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:08.256 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:08.256 "params": { 00:19:08.257 "name": "Nvme1", 00:19:08.257 "trtype": "tcp", 00:19:08.257 "traddr": "10.0.0.2", 00:19:08.257 "adrfam": "ipv4", 00:19:08.257 "trsvcid": "4420", 00:19:08.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.257 "hdgst": false, 00:19:08.257 "ddgst": false 00:19:08.257 }, 00:19:08.257 "method": "bdev_nvme_attach_controller" 00:19:08.257 }' 00:19:08.257 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:08.257 01:37:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:08.257 "params": { 00:19:08.257 "name": "Nvme1", 00:19:08.257 "trtype": "tcp", 00:19:08.257 "traddr": "10.0.0.2", 00:19:08.257 "adrfam": "ipv4", 00:19:08.257 "trsvcid": "4420", 00:19:08.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.257 "hdgst": false, 00:19:08.257 "ddgst": false 00:19:08.257 }, 00:19:08.257 "method": "bdev_nvme_attach_controller" 
00:19:08.257 }' 00:19:08.257 [2024-07-12 01:37:34.418252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:08.257 [2024-07-12 01:37:34.418303] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:08.257 [2024-07-12 01:37:34.420444] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:08.257 [2024-07-12 01:37:34.420497] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:08.257 [2024-07-12 01:37:34.421891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:08.257 [2024-07-12 01:37:34.421937] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:08.257 [2024-07-12 01:37:34.422202] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:08.257 [2024-07-12 01:37:34.422251] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:08.257 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.257 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.257 [2024-07-12 01:37:34.567249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.257 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.257 [2024-07-12 01:37:34.584091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.517 [2024-07-12 01:37:34.619837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.517 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.517 [2024-07-12 01:37:34.637989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:08.517 [2024-07-12 01:37:34.680986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.517 [2024-07-12 01:37:34.700909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.517 [2024-07-12 01:37:34.726224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.517 [2024-07-12 01:37:34.744746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:08.517 Running I/O for 1 seconds... 00:19:08.517 Running I/O for 1 seconds... 00:19:08.517 Running I/O for 1 seconds... 00:19:08.777 Running I/O for 1 seconds... 
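Each of the four bdevperf command lines above receives its NVMe-oF connection as a generated JSON config rather than a file on disk: gen_nvmf_target_json (defined in nvmf/common.sh, which the test sources) emits one bdev_nvme_attach_controller entry for Nvme1, and bash process substitution is why the argument appears as --json /dev/fd/63 in the trace. A sketch of the same pattern, with the Jenkins workspace path shortened to a relative one and assuming nvmf/common.sh is sourced as in the test:

    # One bdevperf instance per workload, each on its own core mask and shm id, all
    # attaching to the same NVMe/TCP subsystem through a generated JSON config.
    bp=./build/examples/bdevperf
    $bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID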
00:19:09.718 00:19:09.718 Latency(us) 00:19:09.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.718 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:09.718 Nvme1n1 : 1.00 187955.85 734.20 0.00 0.00 678.10 274.77 757.76 00:19:09.718 =================================================================================================================== 00:19:09.718 Total : 187955.85 734.20 0.00 0.00 678.10 274.77 757.76 00:19:09.718 00:19:09.718 Latency(us) 00:19:09.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.718 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:09.718 Nvme1n1 : 1.02 8108.64 31.67 0.00 0.00 15636.24 7318.19 25012.91 00:19:09.718 =================================================================================================================== 00:19:09.718 Total : 8108.64 31.67 0.00 0.00 15636.24 7318.19 25012.91 00:19:09.718 00:19:09.718 Latency(us) 00:19:09.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.718 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:09.718 Nvme1n1 : 1.00 20796.36 81.24 0.00 0.00 6140.36 3399.68 13216.43 00:19:09.718 =================================================================================================================== 00:19:09.718 Total : 20796.36 81.24 0.00 0.00 6140.36 3399.68 13216.43 00:19:09.718 00:19:09.718 Latency(us) 00:19:09.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.718 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:09.718 Nvme1n1 : 1.00 8353.63 32.63 0.00 0.00 15281.72 4314.45 38229.33 00:19:09.718 =================================================================================================================== 00:19:09.718 Total : 8353.63 32.63 0.00 0.00 15281.72 4314.45 38229.33 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3954793 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3954796 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3954799 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.979 rmmod nvme_tcp 00:19:09.979 rmmod nvme_fabrics 00:19:09.979 rmmod nvme_keyring 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3954642 ']' 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3954642 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3954642 ']' 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3954642 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3954642 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3954642' 00:19:09.979 killing process with pid 3954642 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3954642 00:19:09.979 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3954642 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.239 01:37:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.147 01:37:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.147 00:19:12.147 real 0m13.579s 00:19:12.147 user 0m18.852s 00:19:12.147 sys 0m7.586s 00:19:12.147 01:37:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:12.147 01:37:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.147 ************************************ 00:19:12.147 END TEST nvmf_bdev_io_wait 00:19:12.147 ************************************ 00:19:12.147 01:37:38 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:12.147 01:37:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:12.147 01:37:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:12.147 01:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.406 ************************************ 00:19:12.406 START TEST nvmf_queue_depth 00:19:12.406 ************************************ 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:12.406 * Looking for test storage... 00:19:12.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.406 01:37:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.407 01:37:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.544 
01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:20.544 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:20.544 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:20.544 Found net devices under 0000:31:00.0: cvl_0_0 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:20.544 Found net devices under 0000:31:00.1: cvl_0_1 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.544 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:20.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:19:20.545 00:19:20.545 --- 10.0.0.2 ping statistics --- 00:19:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.545 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:19:20.545 00:19:20.545 --- 10.0.0.1 ping statistics --- 00:19:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.545 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3959834 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3959834 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3959834 ']' 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.545 01:37:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 [2024-07-12 01:37:46.759941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
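At this point nvmfappstart launches the target for the queue-depth test: nvmf_tgt is started inside the namespace created above and the script blocks until the application's RPC socket answers. A simplified stand-in for that launch-and-wait step, assuming the SPDK repo root as the working directory and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper also checks that the pid is still alive):

    # Start the target inside the namespace, then poll its RPC socket until it is up.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done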
00:19:20.545 [2024-07-12 01:37:46.760006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.545 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.545 [2024-07-12 01:37:46.857484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.806 [2024-07-12 01:37:46.904514] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.806 [2024-07-12 01:37:46.904570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.806 [2024-07-12 01:37:46.904579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.806 [2024-07-12 01:37:46.904585] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.806 [2024-07-12 01:37:46.904591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.806 [2024-07-12 01:37:46.904625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 [2024-07-12 01:37:47.590134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 Malloc0 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.377 01:37:47 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.377 [2024-07-12 01:37:47.644349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3960062 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:21.377 01:37:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3960062 /var/tmp/bdevperf.sock 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3960062 ']' 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.378 01:37:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:21.378 [2024-07-12 01:37:47.695967] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
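The bdevperf instance above is started with -z, so it idles and waits to be configured over its own RPC socket (/var/tmp/bdevperf.sock); as the trace continues below, the test attaches the exported namespace with bdev_nvme_attach_controller and then triggers the 10-second verify run through bdevperf.py. The same flow, with the workspace paths shortened to relative ones:

    # bdevperf in -z mode waits for configuration on its private RPC socket.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # Attach the NVMe/TCP namespace exported by the target, then start the run.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests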
00:19:21.378 [2024-07-12 01:37:47.696020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960062 ] 00:19:21.378 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.638 [2024-07-12 01:37:47.763663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.638 [2024-07-12 01:37:47.799153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 NVMe0n1 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.209 01:37:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.470 Running I/O for 10 seconds... 00:19:32.471 00:19:32.471 Latency(us) 00:19:32.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.471 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:32.471 Verification LBA range: start 0x0 length 0x4000 00:19:32.471 NVMe0n1 : 10.05 11129.93 43.48 0.00 0.00 91648.82 10376.53 74274.13 00:19:32.471 =================================================================================================================== 00:19:32.471 Total : 11129.93 43.48 0.00 0.00 91648.82 10376.53 74274.13 00:19:32.471 0 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3960062 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3960062 ']' 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3960062 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3960062 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3960062' 00:19:32.471 killing process with pid 3960062 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3960062 00:19:32.471 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.471 00:19:32.471 Latency(us) 00:19:32.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.471 =================================================================================================================== 00:19:32.471 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.471 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3960062 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.731 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.732 rmmod nvme_tcp 00:19:32.732 rmmod nvme_fabrics 00:19:32.732 rmmod nvme_keyring 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3959834 ']' 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3959834 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3959834 ']' 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3959834 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:32.732 01:37:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3959834 00:19:32.732 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:32.732 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:32.732 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3959834' 00:19:32.732 killing process with pid 3959834 00:19:32.732 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3959834 00:19:32.732 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3959834 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.992 01:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.986 01:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.986 00:19:34.986 real 0m22.692s 00:19:34.986 user 0m25.526s 00:19:34.986 sys 
0m7.169s 00:19:34.986 01:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:34.986 01:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 ************************************ 00:19:34.986 END TEST nvmf_queue_depth 00:19:34.986 ************************************ 00:19:34.986 01:38:01 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:34.986 01:38:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:34.986 01:38:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:34.986 01:38:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 ************************************ 00:19:34.986 START TEST nvmf_target_multipath 00:19:34.986 ************************************ 00:19:34.986 01:38:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:35.247 * Looking for test storage... 00:19:35.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.247 01:38:01 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.247 01:38:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.248 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.248 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.248 01:38:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.248 01:38:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:43.388 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:43.388 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:43.388 Found net devices under 0000:31:00.0: cvl_0_0 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:43.388 Found net devices under 0000:31:00.1: cvl_0_1 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:43.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:19:43.388 00:19:43.388 --- 10.0.0.2 ping statistics --- 00:19:43.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.388 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:19:43.388 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:19:43.388 00:19:43.389 --- 10.0.0.1 ping statistics --- 00:19:43.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.389 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:43.389 only one NIC for nvmf test 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.389 rmmod nvme_tcp 00:19:43.389 rmmod nvme_fabrics 00:19:43.389 rmmod nvme_keyring 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.389 01:38:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.931 00:19:45.931 real 0m10.488s 00:19:45.931 user 0m2.332s 00:19:45.931 sys 0m6.056s 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:45.931 01:38:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:45.931 ************************************ 00:19:45.931 END TEST nvmf_target_multipath 00:19:45.931 ************************************ 00:19:45.931 01:38:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:45.931 01:38:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:45.931 01:38:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:45.931 01:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.931 ************************************ 00:19:45.931 START TEST nvmf_zcopy 00:19:45.931 ************************************ 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:45.931 * Looking for test storage... 
00:19:45.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.931 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.932 01:38:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.932 01:38:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.932 01:38:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.932 01:38:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.932 01:38:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:54.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.074 
01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:54.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:54.074 Found net devices under 0000:31:00.0: cvl_0_0 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:54.074 Found net devices under 0000:31:00.1: cvl_0_1 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.074 01:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:19:54.074 00:19:54.074 --- 10.0.0.2 ping statistics --- 00:19:54.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.074 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:19:54.074 00:19:54.074 --- 10.0.0.1 ping statistics --- 00:19:54.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.074 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3971509 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3971509 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3971509 ']' 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.074 01:38:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.075 [2024-07-12 01:38:20.316133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:54.075 [2024-07-12 01:38:20.316203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.075 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.075 [2024-07-12 01:38:20.412030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.336 [2024-07-12 01:38:20.458573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.336 [2024-07-12 01:38:20.458630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:54.336 [2024-07-12 01:38:20.458644] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.336 [2024-07-12 01:38:20.458651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.336 [2024-07-12 01:38:20.458656] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.336 [2024-07-12 01:38:20.458679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.910 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 [2024-07-12 01:38:21.152099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 [2024-07-12 01:38:21.176329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 malloc0 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 
01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.911 { 00:19:54.911 "params": { 00:19:54.911 "name": "Nvme$subsystem", 00:19:54.911 "trtype": "$TEST_TRANSPORT", 00:19:54.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.911 "adrfam": "ipv4", 00:19:54.911 "trsvcid": "$NVMF_PORT", 00:19:54.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.911 "hdgst": ${hdgst:-false}, 00:19:54.911 "ddgst": ${ddgst:-false} 00:19:54.911 }, 00:19:54.911 "method": "bdev_nvme_attach_controller" 00:19:54.911 } 00:19:54.911 EOF 00:19:54.911 )") 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:54.911 01:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.911 "params": { 00:19:54.911 "name": "Nvme1", 00:19:54.911 "trtype": "tcp", 00:19:54.911 "traddr": "10.0.0.2", 00:19:54.911 "adrfam": "ipv4", 00:19:54.911 "trsvcid": "4420", 00:19:54.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.911 "hdgst": false, 00:19:54.911 "ddgst": false 00:19:54.911 }, 00:19:54.911 "method": "bdev_nvme_attach_controller" 00:19:54.911 }' 00:19:55.173 [2024-07-12 01:38:21.275656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:55.173 [2024-07-12 01:38:21.275723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971770 ] 00:19:55.173 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.173 [2024-07-12 01:38:21.346018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.173 [2024-07-12 01:38:21.384680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.434 Running I/O for 10 seconds... 
00:20:05.447 00:20:05.447 Latency(us) 00:20:05.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:05.447 Verification LBA range: start 0x0 length 0x1000 00:20:05.447 Nvme1n1 : 10.01 9423.19 73.62 0.00 0.00 13531.67 1774.93 25777.49 00:20:05.447 =================================================================================================================== 00:20:05.447 Total : 9423.19 73.62 0.00 0.00 13531.67 1774.93 25777.49 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3973777 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.447 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.447 { 00:20:05.447 "params": { 00:20:05.447 "name": "Nvme$subsystem", 00:20:05.447 "trtype": "$TEST_TRANSPORT", 00:20:05.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.447 "adrfam": "ipv4", 00:20:05.447 "trsvcid": "$NVMF_PORT", 00:20:05.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.447 "hdgst": ${hdgst:-false}, 00:20:05.447 "ddgst": ${ddgst:-false} 00:20:05.447 }, 00:20:05.447 "method": "bdev_nvme_attach_controller" 00:20:05.447 } 00:20:05.447 EOF 00:20:05.447 )") 00:20:05.708 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:20:05.708 [2024-07-12 01:38:31.806010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.806038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:20:05.708 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:20:05.708 01:38:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:05.708 "params": { 00:20:05.708 "name": "Nvme1", 00:20:05.708 "trtype": "tcp", 00:20:05.708 "traddr": "10.0.0.2", 00:20:05.708 "adrfam": "ipv4", 00:20:05.708 "trsvcid": "4420", 00:20:05.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.708 "hdgst": false, 00:20:05.708 "ddgst": false 00:20:05.708 }, 00:20:05.708 "method": "bdev_nvme_attach_controller" 00:20:05.708 }' 00:20:05.708 [2024-07-12 01:38:31.818010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.818020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.830040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.830048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.842071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.842078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.844763] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:05.708 [2024-07-12 01:38:31.844809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973777 ] 00:20:05.708 [2024-07-12 01:38:31.854102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.854109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.866134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.866141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.708 [2024-07-12 01:38:31.878165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.878172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.890196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.890202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.902226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.902236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.909695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.708 [2024-07-12 01:38:31.914262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.914269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.926292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.926304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.938323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.938335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.940408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.708 [2024-07-12 01:38:31.950356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.950364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.962392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.962404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.974420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.974428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.986448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.986455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:31.998477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:31.998487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:32.010542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:32.010556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:32.022560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:32.022569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:32.034590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:32.034600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:32.046619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:32.046627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.708 [2024-07-12 01:38:32.058651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.708 [2024-07-12 01:38:32.058658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.070680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.070687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.082712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.082719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.094741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.094749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 
[2024-07-12 01:38:32.106771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.106777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.118802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.118808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.130836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.130844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.142865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.142871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.154895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.154902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.166927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.166935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.178960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.178967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.190998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.191013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 Running I/O for 5 seconds... 
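(Aside, not part of the captured log.) The error pair that repeats throughout this run (subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused) is the target refusing an RPC that asks to attach a namespace whose NSID is still in use, issued repeatedly while the bdevperf I/O started above keeps running. A minimal sketch of a loop that would produce the same pair against a live target is shown below; the bdev name Malloc0 and the iteration count are assumptions, and the real test drives this through its own RPC helpers rather than this script.

#!/usr/bin/env bash
# Illustrative only: repeatedly ask the target to add NSID 1 to cnode1 while it
# is already attached.  Each attempt is expected to fail with the same pair of
# messages seen in this log ("Requested NSID 1 already in use" /
# "Unable to add namespace").  Path, bdev name and loop count are assumptions.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

for _ in $(seq 1 10); do
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns -n 1 \
      nqn.2016-06.io.spdk:cnode1 Malloc0 || true   # the failure is the point here
done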
00:20:05.970 [2024-07-12 01:38:32.203023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.203033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.218123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.218140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.231765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.231781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.245192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.245209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.257825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.257841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.270802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.270817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.284614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.284629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.297357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.297372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.310571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.310587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.970 [2024-07-12 01:38:32.323673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.970 [2024-07-12 01:38:32.323688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.337094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.337110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.350089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.350104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.363117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.363132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.376818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.376833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.389991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 
[2024-07-12 01:38:32.390006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.403420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.403435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.416482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.416497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.429381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.429397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.443407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.443422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.455732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.455748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.469305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.469320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.482500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.482515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.495946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.495961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.508850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.508865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.521377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.521392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.534399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.534415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.547021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.547036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.559878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.559893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.573078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.573093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.230 [2024-07-12 01:38:32.585858] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.230 [2024-07-12 01:38:32.585873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.599279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.599296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.611848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.611864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.624593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.624608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.637858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.637874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.651177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.651192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.664464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.664479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.677976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.677991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.691264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.691279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.704586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.704601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.717882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.717901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.731765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.731780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.744619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.744634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.757764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.757779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.489 [2024-07-12 01:38:32.770761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.489 [2024-07-12 01:38:32.770776] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.490 [2024-07-12 01:38:32.783609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.490 [2024-07-12 01:38:32.783624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.490 [2024-07-12 01:38:32.796492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.490 [2024-07-12 01:38:32.796506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.490 [2024-07-12 01:38:32.809308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.490 [2024-07-12 01:38:32.809323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.490 [2024-07-12 01:38:32.821961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.490 [2024-07-12 01:38:32.821976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.490 [2024-07-12 01:38:32.835233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.490 [2024-07-12 01:38:32.835248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.848499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.848515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.861671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.861686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.875304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.875319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.888480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.888495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.901262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.901277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.914645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.914659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.927518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.927532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.940725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.940740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.953997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.954012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.967409] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.967427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.980832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.980847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:32.993223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:32.993242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.006281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.006296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.019782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.019797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.033486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.033501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.046917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.046933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.060383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.060397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.073485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.073500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.085610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.085625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.751 [2024-07-12 01:38:33.099344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.751 [2024-07-12 01:38:33.099359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.112315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.112330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.125541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.125555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.138309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.138323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.151839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.151853] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.165638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.165653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.177889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.177904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.191202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.191216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.203706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.203720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.217207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.217227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.229800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.229815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.242542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.242556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.255598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.255612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.268806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.268820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.282353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.282368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.295799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.295814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.309316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.309331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.321767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.321782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.334734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.334749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.347575] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.347589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.012 [2024-07-12 01:38:33.361050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.012 [2024-07-12 01:38:33.361064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.374309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.374324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.387604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.387619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.400924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.400938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.414118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.414132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.426895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.426909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.439778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.439793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.452335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.452349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.465479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.465497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.478277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.478292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.491423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.491438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.504698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.504713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.518034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.518049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.530706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.530721] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.543391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.543406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.556732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.556747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.569564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.569580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.582586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.582600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.595476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.595490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.607880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.607895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.275 [2024-07-12 01:38:33.620459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.275 [2024-07-12 01:38:33.620473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.632857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.632872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.645701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.645715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.658929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.658945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.671879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.671893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.684735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.684749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.697867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.697881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.710861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.710876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.724040] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.724054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.736699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.736714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.750020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.750035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.763005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.763020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.776240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.776255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.789244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.789258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.802533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.802548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.815359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.815373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.828712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.828727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.842434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.842449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.855863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.855877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.869081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.869096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.537 [2024-07-12 01:38:33.881941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.537 [2024-07-12 01:38:33.881956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.894917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.894933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.907760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.907775] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.921293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.921308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.934448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.934464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.947706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.947722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.961282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.961297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.974318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.974332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:33.986988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:33.987002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.000712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.000728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.013571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.013586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.027192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.027207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.040240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.040256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.053054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.053069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.066248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.066263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.078809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.078824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.092166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.092182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.104949] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.104963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.118630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.118644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.131932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.131947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.145317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.145332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.158687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.158702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.824 [2024-07-12 01:38:34.172055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.824 [2024-07-12 01:38:34.172070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.185579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.185595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.198945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.198960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.211579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.211594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.224038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.224053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.237300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.237315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.249878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.249892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.263326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.263341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.276406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.276421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.289371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.289386] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.303068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.303083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.315568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.315583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.328914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.328930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.341696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.341711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.354572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.354587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.367472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.367487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.380917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.380932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.394094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.394109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.407393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.407409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.420781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.420796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-07-12 01:38:34.434380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-07-12 01:38:34.434395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.446666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.446681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.459894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.459910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.472768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.472782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.486321] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.486336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.499732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.499747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.512405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.512420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.525113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.525128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.537564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.537579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.550841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.550856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.564436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.564451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.577119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.577133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.590150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.590165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.603439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.603455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.616717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.616732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.630118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.630133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.642722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.642737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.655251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.655265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.668520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.668535] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.681367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.681381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.348 [2024-07-12 01:38:34.694019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.348 [2024-07-12 01:38:34.694037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.609 [2024-07-12 01:38:34.707556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.609 [2024-07-12 01:38:34.707572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.609 [2024-07-12 01:38:34.721082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.609 [2024-07-12 01:38:34.721097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.609 [2024-07-12 01:38:34.734060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.609 [2024-07-12 01:38:34.734075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.609 [2024-07-12 01:38:34.747618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.747634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.761076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.761091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.774353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.774368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.787772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.787787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.800338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.800353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.813676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.813690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.827531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.827546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.840572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.840587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.853819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.853834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.866270] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.866284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.878596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.878611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.892053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.892067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.905139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.905153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.918235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.918250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.930697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.930711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.943400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.943418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.610 [2024-07-12 01:38:34.955940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.610 [2024-07-12 01:38:34.955955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:34.968771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:34.968785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:34.981781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:34.981796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:34.994755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:34.994770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.008084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.008100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.020469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.020484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.033099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.033113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.046341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.046356] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.059499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.059514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.072439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.072454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.871 [2024-07-12 01:38:35.085655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.871 [2024-07-12 01:38:35.085670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.098900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.098915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.111754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.111769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.124560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.124575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.138129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.138144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.151402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.151416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.164884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.164898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.177278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.177293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.190489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.190507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.203538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.203553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.872 [2024-07-12 01:38:35.216831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.872 [2024-07-12 01:38:35.216846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.133 [2024-07-12 01:38:35.229705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.133 [2024-07-12 01:38:35.229720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.133 [2024-07-12 01:38:35.242570] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.133 [2024-07-12 01:38:35.242584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.133 [2024-07-12 01:38:35.255696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.133 [2024-07-12 01:38:35.255711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.133 [2024-07-12 01:38:35.268127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.268142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.281420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.281435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.294553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.294567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.307199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.307213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.320713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.320728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.333717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.333732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.347192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.347207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.360550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.360565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.373940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.373955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.386593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.386607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.399298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.399312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.412529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.412543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.426234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.426248] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.438925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.438945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.452307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.452321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.465183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.465198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.134 [2024-07-12 01:38:35.477636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.134 [2024-07-12 01:38:35.477650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.490454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.490468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.503685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.503700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.517088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.517103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.530137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.530151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.542567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.542582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.555068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.555083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.567494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.567508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.580557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.580572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.593587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.593602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.606799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.606813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.620284] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.620299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.633944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.633959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.646830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.646845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.659385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.659400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.672886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.672901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.686250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.686270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.699558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.699574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.713297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.713312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.726072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.726088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.395 [2024-07-12 01:38:35.739711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.395 [2024-07-12 01:38:35.739726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.752965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.752980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.766067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.766082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.779332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.779347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.792524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.792539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.805504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.805519] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.818214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.818233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.831043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.831059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.843925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.843940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.856969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.856984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.869321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.869336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.882961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.882977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.896577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.896592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.909368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.909382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.922606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.922620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.935425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.935440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.948098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.948113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.960857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.960872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.974251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.974266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:35.987967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:35.987982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.657 [2024-07-12 01:38:36.001054] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.657 [2024-07-12 01:38:36.001069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.014256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.014272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.027005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.027020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.039776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.039791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.051909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.051924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.064461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.064476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.077764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.077779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.091148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.091163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.104295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.104309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.117533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.117548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.130711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.130726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.143844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.143859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.156696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.156710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.169725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.169739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.182785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.182801] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.195810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.195825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.208813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.208828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.221178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.221193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.234009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.234024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.247208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.247223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.259963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.259978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.919 [2024-07-12 01:38:36.273203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.919 [2024-07-12 01:38:36.273219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.286449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.286464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.299548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.299564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.312323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.312338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.325023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.325038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.338131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.338146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.350896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.350911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.364272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.364287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.180 [2024-07-12 01:38:36.377342] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.180 [2024-07-12 01:38:36.377358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.390802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.390816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.403669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.403683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.416384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.416398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.429469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.429484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.442327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.442342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.454804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.454818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.468241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.468256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.481380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.481394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.494318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.494332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.507632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.507646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.520501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.520515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.181 [2024-07-12 01:38:36.533301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.181 [2024-07-12 01:38:36.533316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.546372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.546387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.559450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.559464] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.572663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.572677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.585805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.585820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.599086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.599100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.612004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.612018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.624952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.624967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.637709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.637724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.650584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.650599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.664117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.664134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.677035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.677050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.690347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.690362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.703621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.703636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.716653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.716668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.729918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.729932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.743018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.743033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.756160] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.756175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.769365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.769379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.782785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.782799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.443 [2024-07-12 01:38:36.795651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.443 [2024-07-12 01:38:36.795665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.808174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.808188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.820961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.820975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.834504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.834518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.847270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.847284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.860408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.860422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.873427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.873442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.886416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.886431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.898927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.898942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.912262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.912280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.925585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.925599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.938591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.938606] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.951632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.951647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.964911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.964926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.977438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.977452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:36.990299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:36.990313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:37.003218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:37.003237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:37.016269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:37.016284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:37.029390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:37.029405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:37.043023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:37.043038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.704 [2024-07-12 01:38:37.055410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.704 [2024-07-12 01:38:37.055425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.068466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.966 [2024-07-12 01:38:37.068481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.080657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.966 [2024-07-12 01:38:37.080672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.094389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.966 [2024-07-12 01:38:37.094403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.108006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.966 [2024-07-12 01:38:37.108020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.121103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.966 [2024-07-12 01:38:37.121118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.134744] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.134758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.148239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.148253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.160985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.161003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.173495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.173510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.186440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.186454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.199473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.199487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.212549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.212564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966
00:20:10.966 Latency(us)
00:20:10.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:10.966 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:10.966 Nvme1n1 : 5.01 19395.78 151.53 0.00 0.00 6592.82 2757.97 13871.79
00:20:10.966 ===================================================================================================================
00:20:10.966 Total : 19395.78 151.53 0.00 0.00 6592.82 2757.97 13871.79
00:20:10.966 [2024-07-12 01:38:37.221823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.221836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.233854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.233867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.245885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.245896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.257916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.257926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.269945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.269956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:10.966 [2024-07-12 01:38:37.281972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.966 [2024-07-12 01:38:37.281981]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.966 [2024-07-12 01:38:37.294002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.967 [2024-07-12 01:38:37.294011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.967 [2024-07-12 01:38:37.306038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.967 [2024-07-12 01:38:37.306051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.967 [2024-07-12 01:38:37.318068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.967 [2024-07-12 01:38:37.318079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.228 [2024-07-12 01:38:37.330094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.228 [2024-07-12 01:38:37.330101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3973777) - No such process 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3973777 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:11.228 delay0 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.228 01:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:11.228 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.228 [2024-07-12 01:38:37.512430] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:17.813 [2024-07-12 01:38:43.567960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fe60 is same with the state(5) to be set 00:20:17.813 Initializing NVMe Controllers 00:20:17.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.813 Initialization complete. Launching workers. 
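The zcopy teardown traced above re-points namespace 1 of nqn.2016-06.io.spdk:cnode1 at a delay bdev and then runs the abort example against it, presumably so that commands stay outstanding long enough to be abortable. A minimal stand-alone sketch of the same sequence, assuming a running SPDK target, an existing malloc0 bdev, and the stock scripts/rpc.py client on its default socket (the flags and paths below are copied from the trace, not a complete reproduction of zcopy.sh):

  # Swap namespace 1 onto a delay bdev layered over malloc0.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive abortable random read/write traffic at the delayed namespace for 5 seconds.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'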
00:20:17.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:20:17.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 31 00:20:17.813 success 178, unsuccess 218, failed 0 00:20:17.813 01:38:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.814 rmmod nvme_tcp 00:20:17.814 rmmod nvme_fabrics 00:20:17.814 rmmod nvme_keyring 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3971509 ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3971509 ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3971509' 00:20:17.814 killing process with pid 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3971509 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.814 01:38:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.726 01:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.726 00:20:19.726 real 0m34.015s 00:20:19.726 user 0m44.808s 00:20:19.726 sys 0m10.827s 00:20:19.726 01:38:45 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:20:19.726 01:38:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:19.726 ************************************ 00:20:19.726 END TEST nvmf_zcopy 00:20:19.726 ************************************ 00:20:19.726 01:38:45 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:19.726 01:38:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:19.726 01:38:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:19.726 01:38:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.726 ************************************ 00:20:19.726 START TEST nvmf_nmic 00:20:19.726 ************************************ 00:20:19.726 01:38:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:19.726 * Looking for test storage... 00:20:19.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.726 01:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.986 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.986 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.986 01:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.986 01:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:28.125 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:28.125 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:28.125 Found net devices under 0000:31:00.0: cvl_0_0 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.125 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:28.126 Found net devices under 0000:31:00.1: cvl_0_1 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:20:28.126 00:20:28.126 --- 10.0.0.2 ping statistics --- 00:20:28.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.126 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:20:28.126 00:20:28.126 --- 10.0.0.1 ping statistics --- 00:20:28.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.126 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3980759 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3980759 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3980759 ']' 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:28.126 01:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.126 [2024-07-12 01:38:53.964632] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:28.126 [2024-07-12 01:38:53.964699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.126 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.126 [2024-07-12 01:38:54.039407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.126 [2024-07-12 01:38:54.072085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.126 [2024-07-12 01:38:54.072125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:28.126 [2024-07-12 01:38:54.072133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.126 [2024-07-12 01:38:54.072140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.126 [2024-07-12 01:38:54.072147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.126 [2024-07-12 01:38:54.072292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.126 [2024-07-12 01:38:54.072490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.126 [2024-07-12 01:38:54.072491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.126 [2024-07-12 01:38:54.072341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.386 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.386 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:20:28.386 01:38:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.386 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.386 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 [2024-07-12 01:38:54.784914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 Malloc0 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 [2024-07-12 01:38:54.844348] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:28.647 test case1: single bdev can't be used in multiple subsystems 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 [2024-07-12 01:38:54.880282] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:28.647 [2024-07-12 01:38:54.880299] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:28.647 [2024-07-12 01:38:54.880307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:28.647 request: 00:20:28.647 { 00:20:28.647 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:28.647 "namespace": { 00:20:28.647 "bdev_name": "Malloc0", 00:20:28.647 "no_auto_visible": false 00:20:28.647 }, 00:20:28.647 "method": "nvmf_subsystem_add_ns", 00:20:28.647 "req_id": 1 00:20:28.647 } 00:20:28.647 Got JSON-RPC error response 00:20:28.647 response: 00:20:28.647 { 00:20:28.647 "code": -32602, 00:20:28.647 "message": "Invalid parameters" 00:20:28.647 } 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:28.647 Adding namespace failed - expected result. 
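(For reference, a minimal sketch of the RPC sequence that test case1 above exercises, assuming the spdk/scripts/rpc.py helper from this checkout and the default /var/tmp/spdk.sock socket of the nvmf_tgt started earlier; NQNs, serials and the bdev name mirror the trace, and the last call is expected to fail because Malloc0 is already claimed by cnode1.)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used in this run
# create the backing malloc bdev and the first subsystem, then attach the namespace
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# a second subsystem cannot claim the same bdev: this add_ns returns the
# "Invalid parameters" JSON-RPC error shown above (exclusive_write claim held by cnode1)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo ' Adding namespace failed - expected result.'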
00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:28.647 test case2: host connect to nvmf target in multiple paths 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 [2024-07-12 01:38:54.892419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.647 01:38:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:30.562 01:38:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:32.047 01:38:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:32.047 01:38:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:20:32.047 01:38:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:20:32.047 01:38:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:20:32.047 01:38:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:20:33.990 01:38:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:20:33.990 01:38:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:20:33.990 01:38:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:20:33.990 01:39:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:20:33.990 01:39:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:20:33.990 01:39:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:20:33.990 01:39:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:33.990 [global] 00:20:33.990 thread=1 00:20:33.990 invalidate=1 00:20:33.990 rw=write 00:20:33.990 time_based=1 00:20:33.990 runtime=1 00:20:33.990 ioengine=libaio 00:20:33.990 direct=1 00:20:33.990 bs=4096 00:20:33.990 iodepth=1 00:20:33.990 norandommap=0 00:20:33.990 numjobs=1 00:20:33.990 00:20:33.990 verify_dump=1 00:20:33.990 verify_backlog=512 00:20:33.990 verify_state_save=0 00:20:33.990 do_verify=1 00:20:33.990 verify=crc32c-intel 00:20:33.990 [job0] 00:20:33.990 filename=/dev/nvme0n1 00:20:33.990 Could not set queue depth (nvme0n1) 00:20:34.253 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:34.253 fio-3.35 00:20:34.253 Starting 1 thread 00:20:35.640 00:20:35.640 job0: (groupid=0, jobs=1): err= 0: pid=3982047: Fri Jul 12 01:39:01 2024 00:20:35.640 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:20:35.640 slat (nsec): min=24429, max=29774, avg=25883.90, stdev=1466.78 
00:20:35.640 clat (usec): min=850, max=43033, avg=36147.18, stdev=14738.68 00:20:35.640 lat (usec): min=878, max=43058, avg=36173.06, stdev=14737.96 00:20:35.640 clat percentiles (usec): 00:20:35.640 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 1090], 20.00th=[41157], 00:20:35.640 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:35.640 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:20:35.640 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:35.640 | 99.99th=[43254] 00:20:35.640 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:20:35.640 slat (usec): min=9, max=25718, avg=76.55, stdev=1135.50 00:20:35.640 clat (usec): min=163, max=1102, avg=448.21, stdev=89.69 00:20:35.640 lat (usec): min=176, max=26108, avg=524.76, stdev=1136.73 00:20:35.640 clat percentiles (usec): 00:20:35.640 | 1.00th=[ 277], 5.00th=[ 318], 10.00th=[ 367], 20.00th=[ 388], 00:20:35.640 | 30.00th=[ 396], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 469], 00:20:35.640 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 603], 00:20:35.640 | 99.00th=[ 750], 99.50th=[ 824], 99.90th=[ 1106], 99.95th=[ 1106], 00:20:35.640 | 99.99th=[ 1106] 00:20:35.640 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:35.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:35.640 lat (usec) : 250=0.38%, 500=77.49%, 750=17.26%, 1000=1.13% 00:20:35.640 lat (msec) : 2=0.38%, 50=3.38% 00:20:35.640 cpu : usr=0.78%, sys=1.26%, ctx=537, majf=0, minf=1 00:20:35.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:35.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.640 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:35.640 00:20:35.640 Run status group 0 (all jobs): 00:20:35.640 READ: bw=81.4KiB/s (83.3kB/s), 81.4KiB/s-81.4KiB/s (83.3kB/s-83.3kB/s), io=84.0KiB (86.0kB), run=1032-1032msec 00:20:35.640 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:20:35.640 00:20:35.640 Disk stats (read/write): 00:20:35.640 nvme0n1: ios=42/512, merge=0/0, ticks=1559/182, in_queue=1741, util=98.70% 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:35.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.640 rmmod nvme_tcp 00:20:35.640 rmmod nvme_fabrics 00:20:35.640 rmmod nvme_keyring 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3980759 ']' 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3980759 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3980759 ']' 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3980759 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3980759 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3980759' 00:20:35.640 killing process with pid 3980759 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3980759 00:20:35.640 01:39:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3980759 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.901 01:39:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.819 01:39:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.819 00:20:37.819 real 0m18.117s 00:20:37.819 user 0m48.715s 00:20:37.819 sys 0m6.543s 00:20:37.819 01:39:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:37.819 01:39:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:37.819 ************************************ 00:20:37.819 END TEST nvmf_nmic 00:20:37.819 ************************************ 00:20:37.819 01:39:04 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:37.819 01:39:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:37.819 
01:39:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:37.819 01:39:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.819 ************************************ 00:20:37.819 START TEST nvmf_fio_target 00:20:37.819 ************************************ 00:20:37.819 01:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:38.081 * Looking for test storage... 00:20:38.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.081 01:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.228 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.228 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.228 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.229 01:39:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:46.229 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:46.229 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.229 01:39:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:46.229 Found net devices under 0000:31:00.0: cvl_0_0 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:46.229 Found net devices under 0000:31:00.1: cvl_0_1 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:20:46.229 00:20:46.229 --- 10.0.0.2 ping statistics --- 00:20:46.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.229 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:20:46.229 00:20:46.229 --- 10.0.0.1 ping statistics --- 00:20:46.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.229 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3987573 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3987573 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3987573 ']' 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.229 01:39:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.229 [2024-07-12 01:39:12.392741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:46.229 [2024-07-12 01:39:12.392792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.229 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.229 [2024-07-12 01:39:12.468776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.229 [2024-07-12 01:39:12.503128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.229 [2024-07-12 01:39:12.503166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.230 [2024-07-12 01:39:12.503178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.230 [2024-07-12 01:39:12.503185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.230 [2024-07-12 01:39:12.503190] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.230 [2024-07-12 01:39:12.503336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.230 [2024-07-12 01:39:12.503433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.230 [2024-07-12 01:39:12.503587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.230 [2024-07-12 01:39:12.503589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:47.175 [2024-07-12 01:39:13.347319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.175 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.435 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:47.435 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.435 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:47.435 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.697 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:47.697 01:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.959 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:47.959 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:47.959 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:48.220 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:48.220 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:48.481 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:48.481 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:48.481 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:48.481 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:48.742 01:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:49.003 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:49.003 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.003 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:49.003 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:49.264 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.264 [2024-07-12 01:39:15.608228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.524 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:49.524 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:49.785 01:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:20:51.172 01:39:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:20:53.083 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:20:53.083 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:20:53.083 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:20:53.342 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:20:53.342 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:20:53.343 01:39:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:20:53.343 01:39:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:53.343 [global] 00:20:53.343 thread=1 00:20:53.343 invalidate=1 00:20:53.343 rw=write 00:20:53.343 time_based=1 00:20:53.343 runtime=1 00:20:53.343 ioengine=libaio 00:20:53.343 direct=1 00:20:53.343 bs=4096 00:20:53.343 iodepth=1 00:20:53.343 norandommap=0 00:20:53.343 numjobs=1 00:20:53.343 00:20:53.343 verify_dump=1 00:20:53.343 verify_backlog=512 00:20:53.343 verify_state_save=0 00:20:53.343 do_verify=1 00:20:53.343 verify=crc32c-intel 00:20:53.343 [job0] 00:20:53.343 filename=/dev/nvme0n1 00:20:53.343 [job1] 00:20:53.343 filename=/dev/nvme0n2 00:20:53.343 [job2] 00:20:53.343 filename=/dev/nvme0n3 00:20:53.343 [job3] 00:20:53.343 filename=/dev/nvme0n4 00:20:53.343 Could not set queue depth (nvme0n1) 00:20:53.343 Could not set queue depth (nvme0n2) 00:20:53.343 Could not set queue depth (nvme0n3) 00:20:53.343 Could not set queue depth (nvme0n4) 00:20:53.603 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:53.603 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:53.603 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:53.603 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:53.603 fio-3.35 00:20:53.603 Starting 4 threads 00:20:54.986 00:20:54.986 job0: (groupid=0, jobs=1): err= 0: pid=3989255: Fri Jul 12 01:39:21 2024 00:20:54.986 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:54.986 slat (nsec): min=7393, max=58980, avg=24929.73, stdev=3489.65 00:20:54.986 clat (usec): min=665, max=1248, avg=932.55, stdev=84.27 00:20:54.986 lat (usec): min=690, max=1272, avg=957.48, stdev=84.46 00:20:54.986 clat percentiles (usec): 00:20:54.986 | 1.00th=[ 750], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 873], 00:20:54.986 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[ 922], 60.00th=[ 947], 00:20:54.986 | 70.00th=[ 963], 80.00th=[ 1004], 90.00th=[ 1045], 95.00th=[ 1074], 00:20:54.986 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1254], 99.95th=[ 1254], 00:20:54.986 | 99.99th=[ 1254] 
00:20:54.986 write: IOPS=743, BW=2973KiB/s (3044kB/s)(2976KiB/1001msec); 0 zone resets 00:20:54.986 slat (nsec): min=9593, max=64652, avg=29633.45, stdev=9127.71 00:20:54.986 clat (usec): min=284, max=3170, avg=639.21, stdev=145.54 00:20:54.986 lat (usec): min=295, max=3205, avg=668.85, stdev=148.37 00:20:54.986 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 371], 5.00th=[ 437], 10.00th=[ 490], 20.00th=[ 545], 00:20:54.987 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668], 00:20:54.987 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 816], 00:20:54.987 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 3163], 99.95th=[ 3163], 00:20:54.987 | 99.99th=[ 3163] 00:20:54.987 bw ( KiB/s): min= 4096, max= 4096, per=46.71%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.987 lat (usec) : 500=6.77%, 750=45.14%, 1000=39.41% 00:20:54.987 lat (msec) : 2=8.60%, 4=0.08% 00:20:54.987 cpu : usr=2.30%, sys=3.20%, ctx=1258, majf=0, minf=1 00:20:54.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 issued rwts: total=512,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.987 job1: (groupid=0, jobs=1): err= 0: pid=3989265: Fri Jul 12 01:39:21 2024 00:20:54.987 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:20:54.987 slat (nsec): min=26029, max=26851, avg=26311.56, stdev=221.07 00:20:54.987 clat (usec): min=1021, max=42527, avg=39441.80, stdev=10246.39 00:20:54.987 lat (usec): min=1048, max=42554, avg=39468.11, stdev=10246.39 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41681], 20.00th=[41681], 00:20:54.987 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:54.987 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:54.987 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:54.987 | 99.99th=[42730] 00:20:54.987 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:54.987 slat (usec): min=9, max=2878, avg=41.45, stdev=149.75 00:20:54.987 clat (usec): min=310, max=932, avg=670.32, stdev=122.64 00:20:54.987 lat (usec): min=321, max=3513, avg=711.77, stdev=197.37 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 383], 5.00th=[ 457], 10.00th=[ 502], 20.00th=[ 570], 00:20:54.987 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 709], 00:20:54.987 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 865], 00:20:54.987 | 99.00th=[ 914], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930], 00:20:54.987 | 99.99th=[ 930] 00:20:54.987 bw ( KiB/s): min= 4096, max= 4096, per=46.71%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.987 lat (usec) : 500=9.66%, 750=59.66%, 1000=27.65% 00:20:54.987 lat (msec) : 2=0.19%, 50=2.84% 00:20:54.987 cpu : usr=1.00%, sys=2.10%, ctx=531, majf=0, minf=1 00:20:54.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 issued rwts: total=16,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:54.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.987 job2: (groupid=0, jobs=1): err= 0: pid=3989283: Fri Jul 12 01:39:21 2024 00:20:54.987 read: IOPS=15, BW=61.8KiB/s (63.3kB/s)(64.0KiB/1036msec) 00:20:54.987 slat (nsec): min=9738, max=25259, avg=23860.63, stdev=3769.59 00:20:54.987 clat (usec): min=41832, max=42870, avg=42016.56, stdev=237.39 00:20:54.987 lat (usec): min=41857, max=42880, avg=42040.42, stdev=233.81 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:54.987 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:54.987 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:54.987 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:54.987 | 99.99th=[42730] 00:20:54.987 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:20:54.987 slat (nsec): min=9229, max=52423, avg=29365.94, stdev=8912.86 00:20:54.987 clat (usec): min=265, max=1018, avg=674.00, stdev=123.87 00:20:54.987 lat (usec): min=278, max=1051, avg=703.36, stdev=127.51 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 375], 5.00th=[ 445], 10.00th=[ 519], 20.00th=[ 570], 00:20:54.987 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:20:54.987 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 857], 00:20:54.987 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1020], 99.95th=[ 1020], 00:20:54.987 | 99.99th=[ 1020] 00:20:54.987 bw ( KiB/s): min= 4096, max= 4096, per=46.71%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.987 lat (usec) : 500=7.20%, 750=63.26%, 1000=26.33% 00:20:54.987 lat (msec) : 2=0.19%, 50=3.03% 00:20:54.987 cpu : usr=0.87%, sys=1.45%, ctx=528, majf=0, minf=1 00:20:54.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.987 job3: (groupid=0, jobs=1): err= 0: pid=3989290: Fri Jul 12 01:39:21 2024 00:20:54.987 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1040msec) 00:20:54.987 slat (nsec): min=26557, max=27521, avg=26907.71, stdev=270.86 00:20:54.987 clat (usec): min=1180, max=42955, avg=39619.12, stdev=9908.30 00:20:54.987 lat (usec): min=1207, max=42982, avg=39646.02, stdev=9908.37 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:20:54.987 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:54.987 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:54.987 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:54.987 | 99.99th=[42730] 00:20:54.987 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:20:54.987 slat (nsec): min=9282, max=68309, avg=31235.21, stdev=10125.09 00:20:54.987 clat (usec): min=277, max=3289, avg=671.99, stdev=166.64 00:20:54.987 lat (usec): min=288, max=3327, avg=703.22, stdev=170.70 00:20:54.987 clat percentiles (usec): 00:20:54.987 | 1.00th=[ 388], 5.00th=[ 441], 10.00th=[ 498], 20.00th=[ 562], 00:20:54.987 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 
685], 60.00th=[ 709], 00:20:54.987 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 832], 00:20:54.987 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 3294], 99.95th=[ 3294], 00:20:54.987 | 99.99th=[ 3294] 00:20:54.987 bw ( KiB/s): min= 4096, max= 4096, per=46.71%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.987 lat (usec) : 500=10.02%, 750=60.49%, 1000=26.09% 00:20:54.987 lat (msec) : 2=0.19%, 4=0.19%, 50=3.02% 00:20:54.987 cpu : usr=1.15%, sys=1.73%, ctx=531, majf=0, minf=1 00:20:54.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.987 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.987 00:20:54.987 Run status group 0 (all jobs): 00:20:54.987 READ: bw=2158KiB/s (2209kB/s), 61.8KiB/s-2046KiB/s (63.3kB/s-2095kB/s), io=2244KiB (2298kB), run=1001-1040msec 00:20:54.987 WRITE: bw=8769KiB/s (8980kB/s), 1969KiB/s-2973KiB/s (2016kB/s-3044kB/s), io=9120KiB (9339kB), run=1001-1040msec 00:20:54.987 00:20:54.987 Disk stats (read/write): 00:20:54.987 nvme0n1: ios=519/512, merge=0/0, ticks=1277/314, in_queue=1591, util=83.97% 00:20:54.987 nvme0n2: ios=63/512, merge=0/0, ticks=703/265, in_queue=968, util=90.71% 00:20:54.987 nvme0n3: ios=68/512, merge=0/0, ticks=536/318, in_queue=854, util=93.87% 00:20:54.987 nvme0n4: ios=69/512, merge=0/0, ticks=1074/274, in_queue=1348, util=94.01% 00:20:54.987 01:39:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:54.987 [global] 00:20:54.987 thread=1 00:20:54.987 invalidate=1 00:20:54.987 rw=randwrite 00:20:54.987 time_based=1 00:20:54.987 runtime=1 00:20:54.987 ioengine=libaio 00:20:54.987 direct=1 00:20:54.987 bs=4096 00:20:54.987 iodepth=1 00:20:54.987 norandommap=0 00:20:54.987 numjobs=1 00:20:54.987 00:20:54.987 verify_dump=1 00:20:54.987 verify_backlog=512 00:20:54.987 verify_state_save=0 00:20:54.987 do_verify=1 00:20:54.987 verify=crc32c-intel 00:20:54.987 [job0] 00:20:54.987 filename=/dev/nvme0n1 00:20:54.987 [job1] 00:20:54.987 filename=/dev/nvme0n2 00:20:54.987 [job2] 00:20:54.987 filename=/dev/nvme0n3 00:20:54.987 [job3] 00:20:54.987 filename=/dev/nvme0n4 00:20:54.987 Could not set queue depth (nvme0n1) 00:20:54.987 Could not set queue depth (nvme0n2) 00:20:54.987 Could not set queue depth (nvme0n3) 00:20:54.987 Could not set queue depth (nvme0n4) 00:20:55.247 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.247 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.247 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.247 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.247 fio-3.35 00:20:55.247 Starting 4 threads 00:20:56.629 00:20:56.629 job0: (groupid=0, jobs=1): err= 0: pid=3989748: Fri Jul 12 01:39:22 2024 00:20:56.629 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:56.629 slat (nsec): min=23971, max=58943, avg=25060.21, stdev=3082.48 00:20:56.629 clat (usec): min=544, 
max=1294, avg=1057.94, stdev=106.38 00:20:56.629 lat (usec): min=569, max=1318, avg=1083.00, stdev=106.30 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 922], 20.00th=[ 988], 00:20:56.629 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:20:56.629 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:20:56.629 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:20:56.629 | 99.99th=[ 1303] 00:20:56.629 write: IOPS=626, BW=2505KiB/s (2566kB/s)(2508KiB/1001msec); 0 zone resets 00:20:56.629 slat (nsec): min=9156, max=90409, avg=28354.77, stdev=8061.36 00:20:56.629 clat (usec): min=304, max=981, avg=667.38, stdev=121.82 00:20:56.629 lat (usec): min=315, max=1010, avg=695.73, stdev=124.13 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 343], 5.00th=[ 449], 10.00th=[ 490], 20.00th=[ 562], 00:20:56.629 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 709], 00:20:56.629 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 840], 00:20:56.629 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 979], 99.95th=[ 979], 00:20:56.629 | 99.99th=[ 979] 00:20:56.629 bw ( KiB/s): min= 4096, max= 4096, per=44.07%, avg=4096.00, stdev= 0.00, samples=1 00:20:56.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:56.629 lat (usec) : 500=5.97%, 750=35.21%, 1000=24.50% 00:20:56.629 lat (msec) : 2=34.33% 00:20:56.629 cpu : usr=1.80%, sys=3.10%, ctx=1140, majf=0, minf=1 00:20:56.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 issued rwts: total=512,627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.629 job1: (groupid=0, jobs=1): err= 0: pid=3989768: Fri Jul 12 01:39:22 2024 00:20:56.629 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:56.629 slat (nsec): min=7863, max=43182, avg=24950.51, stdev=2861.07 00:20:56.629 clat (usec): min=792, max=1605, avg=1148.97, stdev=121.53 00:20:56.629 lat (usec): min=817, max=1630, avg=1173.92, stdev=121.67 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 914], 5.00th=[ 979], 10.00th=[ 1020], 20.00th=[ 1057], 00:20:56.629 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1156], 00:20:56.629 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1336], 95.00th=[ 1385], 00:20:56.629 | 99.00th=[ 1483], 99.50th=[ 1549], 99.90th=[ 1614], 99.95th=[ 1614], 00:20:56.629 | 99.99th=[ 1614] 00:20:56.629 write: IOPS=607, BW=2430KiB/s (2488kB/s)(2432KiB/1001msec); 0 zone resets 00:20:56.629 slat (nsec): min=3323, max=64145, avg=24356.18, stdev=10027.92 00:20:56.629 clat (usec): min=146, max=1213, avg=619.21, stdev=141.06 00:20:56.629 lat (usec): min=156, max=1219, avg=643.56, stdev=145.12 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 293], 5.00th=[ 392], 10.00th=[ 424], 20.00th=[ 506], 00:20:56.629 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 635], 60.00th=[ 668], 00:20:56.629 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832], 00:20:56.629 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1221], 99.95th=[ 1221], 00:20:56.629 | 99.99th=[ 1221] 00:20:56.629 bw ( KiB/s): min= 4096, max= 4096, per=44.07%, avg=4096.00, stdev= 0.00, samples=1 00:20:56.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:20:56.629 lat (usec) : 250=0.18%, 500=10.18%, 750=34.73%, 1000=12.50% 00:20:56.629 lat (msec) : 2=42.41% 00:20:56.629 cpu : usr=1.40%, sys=3.10%, ctx=1120, majf=0, minf=1 00:20:56.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 issued rwts: total=512,608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.629 job2: (groupid=0, jobs=1): err= 0: pid=3989787: Fri Jul 12 01:39:22 2024 00:20:56.629 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:56.629 slat (nsec): min=8138, max=59763, avg=25849.33, stdev=3612.78 00:20:56.629 clat (usec): min=725, max=1446, avg=1133.37, stdev=106.36 00:20:56.629 lat (usec): min=750, max=1472, avg=1159.22, stdev=106.21 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 832], 5.00th=[ 930], 10.00th=[ 996], 20.00th=[ 1057], 00:20:56.629 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:20:56.629 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:20:56.629 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1450], 99.95th=[ 1450], 00:20:56.629 | 99.99th=[ 1450] 00:20:56.629 write: IOPS=643, BW=2573KiB/s (2635kB/s)(2576KiB/1001msec); 0 zone resets 00:20:56.629 slat (nsec): min=9723, max=53709, avg=29390.17, stdev=8660.21 00:20:56.629 clat (usec): min=244, max=1044, avg=587.19, stdev=131.36 00:20:56.629 lat (usec): min=270, max=1076, avg=616.58, stdev=134.51 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 289], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 482], 00:20:56.629 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 627], 00:20:56.629 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 791], 00:20:56.629 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 1045], 99.95th=[ 1045], 00:20:56.629 | 99.99th=[ 1045] 00:20:56.629 bw ( KiB/s): min= 4096, max= 4096, per=44.07%, avg=4096.00, stdev= 0.00, samples=1 00:20:56.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:56.629 lat (usec) : 250=0.09%, 500=13.32%, 750=37.46%, 1000=9.43% 00:20:56.629 lat (msec) : 2=39.71% 00:20:56.629 cpu : usr=2.30%, sys=2.80%, ctx=1159, majf=0, minf=1 00:20:56.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 issued rwts: total=512,644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.629 job3: (groupid=0, jobs=1): err= 0: pid=3989794: Fri Jul 12 01:39:22 2024 00:20:56.629 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:20:56.629 slat (nsec): min=12988, max=28191, avg=26571.47, stdev=3513.41 00:20:56.629 clat (usec): min=1274, max=42990, avg=39734.48, stdev=9917.88 00:20:56.629 lat (usec): min=1301, max=43017, avg=39761.05, stdev=9917.71 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41681], 20.00th=[41681], 00:20:56.629 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:56.629 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:20:56.629 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:56.629 | 
99.99th=[42730] 00:20:56.629 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:20:56.629 slat (nsec): min=9162, max=71647, avg=30445.79, stdev=10647.39 00:20:56.629 clat (usec): min=374, max=927, avg=650.41, stdev=107.18 00:20:56.629 lat (usec): min=384, max=961, avg=680.86, stdev=112.68 00:20:56.629 clat percentiles (usec): 00:20:56.629 | 1.00th=[ 400], 5.00th=[ 437], 10.00th=[ 515], 20.00th=[ 562], 00:20:56.629 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:20:56.629 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 816], 00:20:56.629 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 930], 00:20:56.629 | 99.99th=[ 930] 00:20:56.629 bw ( KiB/s): min= 4096, max= 4096, per=44.07%, avg=4096.00, stdev= 0.00, samples=1 00:20:56.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:56.629 lat (usec) : 500=7.37%, 750=68.62%, 1000=20.79% 00:20:56.629 lat (msec) : 2=0.19%, 50=3.02% 00:20:56.629 cpu : usr=0.97%, sys=2.04%, ctx=531, majf=0, minf=1 00:20:56.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.629 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:56.629 00:20:56.629 Run status group 0 (all jobs): 00:20:56.629 READ: bw=6037KiB/s (6182kB/s), 66.1KiB/s-2046KiB/s (67.7kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1029msec 00:20:56.629 WRITE: bw=9294KiB/s (9518kB/s), 1990KiB/s-2573KiB/s (2038kB/s-2635kB/s), io=9564KiB (9794kB), run=1001-1029msec 00:20:56.629 00:20:56.629 Disk stats (read/write): 00:20:56.629 nvme0n1: ios=485/512, merge=0/0, ticks=488/332, in_queue=820, util=86.37% 00:20:56.629 nvme0n2: ios=480/512, merge=0/0, ticks=592/302, in_queue=894, util=92.34% 00:20:56.629 nvme0n3: ios=491/512, merge=0/0, ticks=621/288, in_queue=909, util=100.00% 00:20:56.629 nvme0n4: ios=54/512, merge=0/0, ticks=1409/270, in_queue=1679, util=99.68% 00:20:56.629 01:39:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:56.629 [global] 00:20:56.629 thread=1 00:20:56.629 invalidate=1 00:20:56.629 rw=write 00:20:56.629 time_based=1 00:20:56.629 runtime=1 00:20:56.629 ioengine=libaio 00:20:56.629 direct=1 00:20:56.629 bs=4096 00:20:56.629 iodepth=128 00:20:56.629 norandommap=0 00:20:56.629 numjobs=1 00:20:56.629 00:20:56.629 verify_dump=1 00:20:56.629 verify_backlog=512 00:20:56.629 verify_state_save=0 00:20:56.629 do_verify=1 00:20:56.629 verify=crc32c-intel 00:20:56.629 [job0] 00:20:56.629 filename=/dev/nvme0n1 00:20:56.629 [job1] 00:20:56.629 filename=/dev/nvme0n2 00:20:56.629 [job2] 00:20:56.629 filename=/dev/nvme0n3 00:20:56.629 [job3] 00:20:56.629 filename=/dev/nvme0n4 00:20:56.629 Could not set queue depth (nvme0n1) 00:20:56.629 Could not set queue depth (nvme0n2) 00:20:56.629 Could not set queue depth (nvme0n3) 00:20:56.630 Could not set queue depth (nvme0n4) 00:20:56.889 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.889 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.889 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:20:56.889 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.889 fio-3.35 00:20:56.889 Starting 4 threads 00:20:58.267 00:20:58.267 job0: (groupid=0, jobs=1): err= 0: pid=3990233: Fri Jul 12 01:39:24 2024 00:20:58.267 read: IOPS=3970, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1006msec) 00:20:58.267 slat (nsec): min=892, max=22603k, avg=127794.33, stdev=889954.21 00:20:58.267 clat (usec): min=3064, max=69713, avg=16524.09, stdev=17126.60 00:20:58.267 lat (usec): min=3068, max=69719, avg=16651.88, stdev=17249.72 00:20:58.267 clat percentiles (usec): 00:20:58.267 | 1.00th=[ 3916], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 7701], 00:20:58.267 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[11207], 00:20:58.267 | 70.00th=[11863], 80.00th=[14615], 90.00th=[52167], 95.00th=[62129], 00:20:58.267 | 99.00th=[66847], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:20:58.267 | 99.99th=[69731] 00:20:58.267 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:20:58.267 slat (nsec): min=1557, max=15034k, avg=114501.27, stdev=839139.73 00:20:58.267 clat (usec): min=2961, max=60262, avg=15003.24, stdev=11559.92 00:20:58.267 lat (usec): min=2972, max=60270, avg=15117.74, stdev=11616.60 00:20:58.267 clat percentiles (usec): 00:20:58.267 | 1.00th=[ 3097], 5.00th=[ 4228], 10.00th=[ 6063], 20.00th=[ 6980], 00:20:58.268 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[12125], 00:20:58.268 | 70.00th=[16188], 80.00th=[22414], 90.00th=[33424], 95.00th=[42206], 00:20:58.268 | 99.00th=[52691], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:20:58.268 | 99.99th=[60031] 00:20:58.268 bw ( KiB/s): min=12288, max=20480, per=17.86%, avg=16384.00, stdev=5792.62, samples=2 00:20:58.268 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:20:58.268 lat (msec) : 4=2.22%, 10=49.16%, 20=29.12%, 50=13.28%, 100=6.22% 00:20:58.268 cpu : usr=2.79%, sys=3.18%, ctx=353, majf=0, minf=1 00:20:58.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:58.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.268 issued rwts: total=3994,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.268 job1: (groupid=0, jobs=1): err= 0: pid=3990237: Fri Jul 12 01:39:24 2024 00:20:58.268 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:20:58.268 slat (nsec): min=898, max=7814.2k, avg=67282.77, stdev=464199.11 00:20:58.268 clat (usec): min=1600, max=27429, avg=9214.99, stdev=3264.10 00:20:58.268 lat (usec): min=1639, max=27437, avg=9282.27, stdev=3292.62 00:20:58.268 clat percentiles (usec): 00:20:58.268 | 1.00th=[ 3818], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 6980], 00:20:58.268 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9110], 00:20:58.268 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12125], 95.00th=[13566], 00:20:58.268 | 99.00th=[22414], 99.50th=[24773], 99.90th=[27395], 99.95th=[27395], 00:20:58.268 | 99.99th=[27395] 00:20:58.268 write: IOPS=7216, BW=28.2MiB/s (29.6MB/s)(28.3MiB/1004msec); 0 zone resets 00:20:58.268 slat (nsec): min=1624, max=15390k, avg=65057.58, stdev=526291.10 00:20:58.268 clat (usec): min=1085, max=37471, avg=8455.87, stdev=4529.87 00:20:58.268 lat (usec): min=1095, max=37504, avg=8520.93, stdev=4558.19 00:20:58.268 clat percentiles 
(usec): 00:20:58.268 | 1.00th=[ 3654], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5407], 00:20:58.268 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7767], 00:20:58.268 | 70.00th=[ 8455], 80.00th=[10552], 90.00th=[13960], 95.00th=[15926], 00:20:58.268 | 99.00th=[28443], 99.50th=[31589], 99.90th=[32375], 99.95th=[33424], 00:20:58.268 | 99.99th=[37487] 00:20:58.268 bw ( KiB/s): min=26672, max=30728, per=31.28%, avg=28700.00, stdev=2868.03, samples=2 00:20:58.268 iops : min= 6668, max= 7682, avg=7175.00, stdev=717.01, samples=2 00:20:58.268 lat (msec) : 2=0.20%, 4=1.67%, 10=73.85%, 20=21.47%, 50=2.80% 00:20:58.268 cpu : usr=4.89%, sys=7.18%, ctx=403, majf=0, minf=1 00:20:58.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:58.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.268 issued rwts: total=7168,7245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.268 job2: (groupid=0, jobs=1): err= 0: pid=3990247: Fri Jul 12 01:39:24 2024 00:20:58.268 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:20:58.268 slat (nsec): min=951, max=21755k, avg=110387.99, stdev=886574.75 00:20:58.268 clat (usec): min=3052, max=61672, avg=14353.68, stdev=8668.77 00:20:58.268 lat (usec): min=3064, max=61696, avg=14464.07, stdev=8739.84 00:20:58.268 clat percentiles (usec): 00:20:58.268 | 1.00th=[ 5014], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[ 9765], 00:20:58.268 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11994], 00:20:58.268 | 70.00th=[13829], 80.00th=[16057], 90.00th=[25822], 95.00th=[34866], 00:20:58.268 | 99.00th=[46400], 99.50th=[50594], 99.90th=[50594], 99.95th=[57934], 00:20:58.268 | 99.99th=[61604] 00:20:58.268 write: IOPS=5185, BW=20.3MiB/s (21.2MB/s)(20.4MiB/1006msec); 0 zone resets 00:20:58.268 slat (nsec): min=1674, max=10360k, avg=75196.31, stdev=556332.11 00:20:58.268 clat (usec): min=569, max=32010, avg=10362.61, stdev=3416.18 00:20:58.268 lat (usec): min=610, max=32021, avg=10437.81, stdev=3435.48 00:20:58.268 clat percentiles (usec): 00:20:58.268 | 1.00th=[ 3458], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7308], 00:20:58.268 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10421], 60.00th=[10814], 00:20:58.268 | 70.00th=[11600], 80.00th=[12780], 90.00th=[14484], 95.00th=[17433], 00:20:58.268 | 99.00th=[19530], 99.50th=[19792], 99.90th=[23200], 99.95th=[23200], 00:20:58.268 | 99.99th=[32113] 00:20:58.268 bw ( KiB/s): min=17368, max=23679, per=22.37%, avg=20523.50, stdev=4462.55, samples=2 00:20:58.268 iops : min= 4342, max= 5919, avg=5130.50, stdev=1115.11, samples=2 00:20:58.268 lat (usec) : 750=0.01%, 1000=0.01% 00:20:58.268 lat (msec) : 2=0.11%, 4=1.00%, 10=33.63%, 20=58.90%, 50=5.91% 00:20:58.268 lat (msec) : 100=0.45% 00:20:58.268 cpu : usr=3.48%, sys=5.47%, ctx=344, majf=0, minf=1 00:20:58.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:58.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.268 issued rwts: total=5120,5217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.268 job3: (groupid=0, jobs=1): err= 0: pid=3990254: Fri Jul 12 01:39:24 2024 00:20:58.268 read: IOPS=6107, BW=23.9MiB/s 
(25.0MB/s)(24.0MiB/1006msec) 00:20:58.268 slat (nsec): min=915, max=17497k, avg=75918.69, stdev=703452.05 00:20:58.268 clat (usec): min=2134, max=57025, avg=10645.12, stdev=9253.84 00:20:58.268 lat (usec): min=2161, max=57034, avg=10721.04, stdev=9317.74 00:20:58.268 clat percentiles (usec): 00:20:58.268 | 1.00th=[ 3064], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 6915], 00:20:58.268 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8356], 00:20:58.268 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[15533], 95.00th=[34866], 00:20:58.268 | 99.00th=[56361], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:20:58.268 | 99.99th=[56886] 00:20:58.268 write: IOPS=6480, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1006msec); 0 zone resets 00:20:58.268 slat (nsec): min=1651, max=14277k, avg=62741.83, stdev=516864.14 00:20:58.268 clat (usec): min=1258, max=70605, avg=9542.58, stdev=8725.33 00:20:58.268 lat (usec): min=1268, max=70613, avg=9605.32, stdev=8768.69 00:20:58.268 clat percentiles (usec): 00:20:58.268 | 1.00th=[ 2540], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 4752], 00:20:58.268 | 30.00th=[ 5473], 40.00th=[ 6587], 50.00th=[ 7504], 60.00th=[ 7832], 00:20:58.268 | 70.00th=[ 8848], 80.00th=[10814], 90.00th=[17695], 95.00th=[25560], 00:20:58.268 | 99.00th=[55313], 99.50th=[62653], 99.90th=[68682], 99.95th=[70779], 00:20:58.268 | 99.99th=[70779] 00:20:58.268 bw ( KiB/s): min=22472, max=28664, per=27.86%, avg=25568.00, stdev=4378.41, samples=2 00:20:58.268 iops : min= 5618, max= 7166, avg=6392.00, stdev=1094.60, samples=2 00:20:58.268 lat (msec) : 2=0.21%, 4=4.53%, 10=71.03%, 20=17.18%, 50=5.33% 00:20:58.268 lat (msec) : 100=1.72% 00:20:58.268 cpu : usr=5.27%, sys=6.07%, ctx=392, majf=0, minf=1 00:20:58.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:58.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.268 issued rwts: total=6144,6519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.268 00:20:58.268 Run status group 0 (all jobs): 00:20:58.268 READ: bw=87.1MiB/s (91.3MB/s), 15.5MiB/s-27.9MiB/s (16.3MB/s-29.2MB/s), io=87.6MiB (91.9MB), run=1004-1006msec 00:20:58.268 WRITE: bw=89.6MiB/s (94.0MB/s), 15.9MiB/s-28.2MiB/s (16.7MB/s-29.6MB/s), io=90.1MiB (94.5MB), run=1004-1006msec 00:20:58.268 00:20:58.268 Disk stats (read/write): 00:20:58.268 nvme0n1: ios=2668/3072, merge=0/0, ticks=17918/18890, in_queue=36808, util=94.19% 00:20:58.268 nvme0n2: ios=5666/6052, merge=0/0, ticks=40211/34836, in_queue=75047, util=96.43% 00:20:58.268 nvme0n3: ios=4659/4904, merge=0/0, ticks=47460/43865, in_queue=91325, util=98.95% 00:20:58.268 nvme0n4: ios=5803/6144, merge=0/0, ticks=40676/39277, in_queue=79953, util=99.79% 00:20:58.268 01:39:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:58.268 [global] 00:20:58.268 thread=1 00:20:58.268 invalidate=1 00:20:58.268 rw=randwrite 00:20:58.268 time_based=1 00:20:58.268 runtime=1 00:20:58.268 ioengine=libaio 00:20:58.268 direct=1 00:20:58.268 bs=4096 00:20:58.268 iodepth=128 00:20:58.268 norandommap=0 00:20:58.268 numjobs=1 00:20:58.268 00:20:58.268 verify_dump=1 00:20:58.268 verify_backlog=512 00:20:58.268 verify_state_save=0 00:20:58.268 do_verify=1 00:20:58.268 verify=crc32c-intel 00:20:58.268 [job0] 00:20:58.268 
filename=/dev/nvme0n1 00:20:58.268 [job1] 00:20:58.268 filename=/dev/nvme0n2 00:20:58.268 [job2] 00:20:58.268 filename=/dev/nvme0n3 00:20:58.268 [job3] 00:20:58.268 filename=/dev/nvme0n4 00:20:58.268 Could not set queue depth (nvme0n1) 00:20:58.268 Could not set queue depth (nvme0n2) 00:20:58.268 Could not set queue depth (nvme0n3) 00:20:58.268 Could not set queue depth (nvme0n4) 00:20:58.838 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:58.838 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:58.838 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:58.838 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:58.838 fio-3.35 00:20:58.838 Starting 4 threads 00:20:59.776 00:20:59.776 job0: (groupid=0, jobs=1): err= 0: pid=3990739: Fri Jul 12 01:39:26 2024 00:20:59.776 read: IOPS=3978, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec) 00:20:59.776 slat (nsec): min=913, max=35528k, avg=126151.58, stdev=1098462.59 00:20:59.776 clat (usec): min=984, max=86342, avg=15139.11, stdev=12244.93 00:20:59.776 lat (usec): min=4645, max=86350, avg=15265.26, stdev=12368.35 00:20:59.776 clat percentiles (usec): 00:20:59.776 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 8455], 00:20:59.776 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11338], 60.00th=[11863], 00:20:59.776 | 70.00th=[12911], 80.00th=[15008], 90.00th=[34866], 95.00th=[43254], 00:20:59.776 | 99.00th=[66847], 99.50th=[66847], 99.90th=[72877], 99.95th=[86508], 00:20:59.776 | 99.99th=[86508] 00:20:59.776 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:20:59.776 slat (nsec): min=1493, max=29231k, avg=117796.12, stdev=771020.63 00:20:59.776 clat (usec): min=3909, max=86321, avg=16265.83, stdev=10299.26 00:20:59.776 lat (usec): min=3918, max=86330, avg=16383.62, stdev=10375.25 00:20:59.776 clat percentiles (usec): 00:20:59.776 | 1.00th=[ 6128], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7504], 00:20:59.776 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[13566], 60.00th=[17433], 00:20:59.776 | 70.00th=[19530], 80.00th=[21890], 90.00th=[26346], 95.00th=[39584], 00:20:59.776 | 99.00th=[51119], 99.50th=[51119], 99.90th=[69731], 99.95th=[69731], 00:20:59.776 | 99.99th=[86508] 00:20:59.776 bw ( KiB/s): min=12288, max=20480, per=20.87%, avg=16384.00, stdev=5792.62, samples=2 00:20:59.776 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:20:59.776 lat (usec) : 1000=0.01% 00:20:59.776 lat (msec) : 4=0.05%, 10=34.45%, 20=43.49%, 50=19.57%, 100=2.44% 00:20:59.776 cpu : usr=1.99%, sys=3.19%, ctx=484, majf=0, minf=1 00:20:59.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:59.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.776 issued rwts: total=3994,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.776 job1: (groupid=0, jobs=1): err= 0: pid=3990747: Fri Jul 12 01:39:26 2024 00:20:59.776 read: IOPS=3701, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1011msec) 00:20:59.776 slat (nsec): min=901, max=16061k, avg=102918.70, stdev=857627.64 00:20:59.776 clat (usec): min=1013, max=40785, avg=13749.41, stdev=6803.40 00:20:59.776 lat (usec): 
min=1025, max=41001, avg=13852.33, stdev=6878.48 00:20:59.776 clat percentiles (usec): 00:20:59.776 | 1.00th=[ 1713], 5.00th=[ 2409], 10.00th=[ 4817], 20.00th=[ 8455], 00:20:59.776 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[12780], 60.00th=[17171], 00:20:59.776 | 70.00th=[18220], 80.00th=[20055], 90.00th=[21890], 95.00th=[24511], 00:20:59.776 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31589], 99.95th=[36439], 00:20:59.776 | 99.99th=[40633] 00:20:59.776 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:20:59.776 slat (nsec): min=1595, max=16178k, avg=140589.74, stdev=950936.91 00:20:59.776 clat (usec): min=385, max=117676, avg=18588.99, stdev=21229.23 00:20:59.776 lat (usec): min=415, max=117684, avg=18729.58, stdev=21360.26 00:20:59.776 clat percentiles (usec): 00:20:59.776 | 1.00th=[ 1106], 5.00th=[ 2008], 10.00th=[ 2999], 20.00th=[ 5342], 00:20:59.776 | 30.00th=[ 6980], 40.00th=[ 8717], 50.00th=[ 11338], 60.00th=[ 15926], 00:20:59.776 | 70.00th=[ 18744], 80.00th=[ 25297], 90.00th=[ 39584], 95.00th=[ 69731], 00:20:59.776 | 99.00th=[108528], 99.50th=[109577], 99.90th=[117965], 99.95th=[117965], 00:20:59.776 | 99.99th=[117965] 00:20:59.776 bw ( KiB/s): min=12288, max=20480, per=20.87%, avg=16384.00, stdev=5792.62, samples=2 00:20:59.776 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:20:59.776 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.33% 00:20:59.776 lat (msec) : 2=3.67%, 4=7.08%, 10=32.56%, 20=28.80%, 50=23.35% 00:20:59.776 lat (msec) : 100=3.18%, 250=0.97% 00:20:59.776 cpu : usr=3.37%, sys=3.56%, ctx=343, majf=0, minf=1 00:20:59.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:59.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.776 issued rwts: total=3742,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.776 job2: (groupid=0, jobs=1): err= 0: pid=3990756: Fri Jul 12 01:39:26 2024 00:20:59.776 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:20:59.776 slat (nsec): min=922, max=38541k, avg=75326.96, stdev=625213.23 00:20:59.777 clat (usec): min=4802, max=46621, avg=9458.32, stdev=5363.63 00:20:59.777 lat (usec): min=4805, max=69603, avg=9533.65, stdev=5421.15 00:20:59.777 clat percentiles (usec): 00:20:59.777 | 1.00th=[ 5080], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 6980], 00:20:59.777 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8979], 00:20:59.777 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11863], 95.00th=[15008], 00:20:59.777 | 99.00th=[43779], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:20:59.777 | 99.99th=[46400] 00:20:59.777 write: IOPS=7016, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1003msec); 0 zone resets 00:20:59.777 slat (nsec): min=1545, max=6614.4k, avg=66181.46, stdev=351795.81 00:20:59.777 clat (usec): min=1463, max=24449, avg=9074.87, stdev=3272.07 00:20:59.777 lat (usec): min=3684, max=24451, avg=9141.05, stdev=3295.72 00:20:59.777 clat percentiles (usec): 00:20:59.777 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6718], 00:20:59.777 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8356], 00:20:59.777 | 70.00th=[ 9503], 80.00th=[11994], 90.00th=[13698], 95.00th=[14877], 00:20:59.777 | 99.00th=[21365], 99.50th=[23725], 99.90th=[24249], 99.95th=[24511], 00:20:59.777 | 99.99th=[24511] 00:20:59.777 bw ( KiB/s): min=22520, max=32768, 
per=35.22%, avg=27644.00, stdev=7246.43, samples=2 00:20:59.777 iops : min= 5630, max= 8192, avg=6911.00, stdev=1811.61, samples=2 00:20:59.777 lat (msec) : 2=0.01%, 4=0.06%, 10=72.83%, 20=25.35%, 50=1.75% 00:20:59.777 cpu : usr=3.09%, sys=5.29%, ctx=703, majf=0, minf=1 00:20:59.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:59.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.777 issued rwts: total=6656,7038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.777 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.777 job3: (groupid=0, jobs=1): err= 0: pid=3990757: Fri Jul 12 01:39:26 2024 00:20:59.777 read: IOPS=4176, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1011msec) 00:20:59.777 slat (nsec): min=1294, max=15739k, avg=120438.98, stdev=961759.02 00:20:59.777 clat (usec): min=1351, max=87210, avg=15015.29, stdev=10369.10 00:20:59.777 lat (usec): min=2003, max=87217, avg=15135.73, stdev=10471.16 00:20:59.777 clat percentiles (usec): 00:20:59.777 | 1.00th=[ 2999], 5.00th=[ 5538], 10.00th=[ 6587], 20.00th=[ 8160], 00:20:59.777 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11469], 60.00th=[14615], 00:20:59.777 | 70.00th=[18220], 80.00th=[20317], 90.00th=[25035], 95.00th=[27657], 00:20:59.777 | 99.00th=[68682], 99.50th=[74974], 99.90th=[87557], 99.95th=[87557], 00:20:59.777 | 99.99th=[87557] 00:20:59.777 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:20:59.777 slat (nsec): min=1500, max=14614k, avg=86142.88, stdev=693175.51 00:20:59.777 clat (usec): min=632, max=87220, avg=14079.08, stdev=11916.98 00:20:59.777 lat (usec): min=639, max=87228, avg=14165.22, stdev=11984.51 00:20:59.777 clat percentiles (usec): 00:20:59.777 | 1.00th=[ 1237], 5.00th=[ 2409], 10.00th=[ 4359], 20.00th=[ 6783], 00:20:59.777 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[13173], 00:20:59.777 | 70.00th=[15533], 80.00th=[17433], 90.00th=[25297], 95.00th=[40633], 00:20:59.777 | 99.00th=[60031], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:20:59.777 | 99.99th=[87557] 00:20:59.777 bw ( KiB/s): min=16368, max=20480, per=23.47%, avg=18424.00, stdev=2907.62, samples=2 00:20:59.777 iops : min= 4092, max= 5120, avg=4606.00, stdev=726.91, samples=2 00:20:59.777 lat (usec) : 750=0.03%, 1000=0.09% 00:20:59.777 lat (msec) : 2=2.04%, 4=4.11%, 10=30.03%, 20=43.43%, 50=17.63% 00:20:59.777 lat (msec) : 100=2.63% 00:20:59.777 cpu : usr=3.17%, sys=5.15%, ctx=325, majf=0, minf=1 00:20:59.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:59.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.777 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.777 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.777 00:20:59.777 Run status group 0 (all jobs): 00:20:59.777 READ: bw=71.9MiB/s (75.4MB/s), 14.5MiB/s-25.9MiB/s (15.2MB/s-27.2MB/s), io=72.7MiB (76.2MB), run=1003-1011msec 00:20:59.777 WRITE: bw=76.6MiB/s (80.4MB/s), 15.8MiB/s-27.4MiB/s (16.6MB/s-28.7MB/s), io=77.5MiB (81.3MB), run=1003-1011msec 00:20:59.777 00:20:59.777 Disk stats (read/write): 00:20:59.777 nvme0n1: ios=3239/3584, merge=0/0, ticks=20253/21467, in_queue=41720, util=88.28% 00:20:59.777 nvme0n2: ios=2591/2894, merge=0/0, ticks=36566/62627, in_queue=99193, util=96.33% 00:20:59.777 
nvme0n3: ios=5669/5847, merge=0/0, ticks=25103/23080, in_queue=48183, util=96.94% 00:20:59.777 nvme0n4: ios=4096/4207, merge=0/0, ticks=53172/45065, in_queue=98237, util=89.53% 00:20:59.777 01:39:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:59.777 01:39:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3991070 00:20:59.777 01:39:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:59.777 01:39:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:00.036 [global] 00:21:00.036 thread=1 00:21:00.036 invalidate=1 00:21:00.036 rw=read 00:21:00.036 time_based=1 00:21:00.036 runtime=10 00:21:00.036 ioengine=libaio 00:21:00.036 direct=1 00:21:00.036 bs=4096 00:21:00.036 iodepth=1 00:21:00.036 norandommap=1 00:21:00.036 numjobs=1 00:21:00.036 00:21:00.036 [job0] 00:21:00.036 filename=/dev/nvme0n1 00:21:00.036 [job1] 00:21:00.036 filename=/dev/nvme0n2 00:21:00.036 [job2] 00:21:00.036 filename=/dev/nvme0n3 00:21:00.036 [job3] 00:21:00.036 filename=/dev/nvme0n4 00:21:00.036 Could not set queue depth (nvme0n1) 00:21:00.036 Could not set queue depth (nvme0n2) 00:21:00.036 Could not set queue depth (nvme0n3) 00:21:00.036 Could not set queue depth (nvme0n4) 00:21:00.294 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.294 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.294 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.294 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.294 fio-3.35 00:21:00.294 Starting 4 threads 00:21:02.832 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:03.094 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=262144, buflen=4096 00:21:03.094 fio: pid=3991266, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:03.094 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:03.355 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.355 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:03.355 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=368640, buflen=4096 00:21:03.355 fio: pid=3991262, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:03.355 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10727424, buflen=4096 00:21:03.355 fio: pid=3991260, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:03.355 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.355 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:03.615 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=10612736, buflen=4096 00:21:03.615 fio: pid=3991261, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:21:03.615 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.615 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:03.615 00:21:03.615 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3991260: Fri Jul 12 01:39:29 2024 00:21:03.615 read: IOPS=906, BW=3626KiB/s (3713kB/s)(10.2MiB/2889msec) 00:21:03.615 slat (usec): min=6, max=13768, avg=33.82, stdev=340.41 00:21:03.615 clat (usec): min=527, max=1408, avg=1062.49, stdev=157.07 00:21:03.615 lat (usec): min=551, max=15027, avg=1096.32, stdev=379.43 00:21:03.615 clat percentiles (usec): 00:21:03.615 | 1.00th=[ 635], 5.00th=[ 750], 10.00th=[ 824], 20.00th=[ 930], 00:21:03.615 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1139], 00:21:03.615 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1254], 00:21:03.615 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1401], 99.95th=[ 1401], 00:21:03.615 | 99.99th=[ 1401] 00:21:03.615 bw ( KiB/s): min= 3368, max= 4144, per=52.36%, avg=3675.20, stdev=327.35, samples=5 00:21:03.615 iops : min= 842, max= 1036, avg=918.80, stdev=81.84, samples=5 00:21:03.615 lat (usec) : 750=5.42%, 1000=23.47% 00:21:03.615 lat (msec) : 2=71.07% 00:21:03.615 cpu : usr=0.97%, sys=2.63%, ctx=2623, majf=0, minf=1 00:21:03.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.615 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.615 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:03.615 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3991261: Fri Jul 12 01:39:29 2024 00:21:03.615 read: IOPS=847, BW=3390KiB/s (3472kB/s)(10.1MiB/3057msec) 00:21:03.615 slat (usec): min=6, max=18330, avg=49.63, stdev=531.45 00:21:03.615 clat (usec): min=613, max=7132, avg=1123.35, stdev=188.06 00:21:03.615 lat (usec): min=648, max=19447, avg=1172.99, stdev=566.32 00:21:03.615 clat percentiles (usec): 00:21:03.615 | 1.00th=[ 840], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1045], 00:21:03.615 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1156], 00:21:03.615 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1221], 95.00th=[ 1254], 00:21:03.615 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1401], 99.95th=[ 6783], 00:21:03.616 | 99.99th=[ 7111] 00:21:03.616 bw ( KiB/s): min= 3344, max= 3672, per=49.64%, avg=3484.80, stdev=136.19, samples=5 00:21:03.616 iops : min= 836, max= 918, avg=871.20, stdev=34.05, samples=5 00:21:03.616 lat (usec) : 750=0.23%, 1000=11.69% 00:21:03.616 lat (msec) : 2=87.96%, 10=0.08% 00:21:03.616 cpu : usr=1.57%, sys=3.37%, ctx=2597, majf=0, minf=1 00:21:03.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:03.616 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote 
I/O error): pid=3991262: Fri Jul 12 01:39:29 2024 00:21:03.616 read: IOPS=32, BW=130KiB/s (133kB/s)(360KiB/2763msec) 00:21:03.616 slat (usec): min=8, max=12534, avg=161.86, stdev=1311.45 00:21:03.616 clat (usec): min=708, max=43024, avg=30523.47, stdev=18441.22 00:21:03.616 lat (usec): min=743, max=53925, avg=30686.85, stdev=18570.55 00:21:03.616 clat percentiles (usec): 00:21:03.616 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 1074], 00:21:03.616 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:21:03.616 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:21:03.616 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:03.616 | 99.99th=[43254] 00:21:03.616 bw ( KiB/s): min= 96, max= 168, per=1.91%, avg=134.40, stdev=29.61, samples=5 00:21:03.616 iops : min= 24, max= 42, avg=33.60, stdev= 7.40, samples=5 00:21:03.616 lat (usec) : 750=2.20%, 1000=14.29% 00:21:03.616 lat (msec) : 2=10.99%, 50=71.43% 00:21:03.616 cpu : usr=0.14%, sys=0.00%, ctx=92, majf=0, minf=1 00:21:03.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:03.616 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3991266: Fri Jul 12 01:39:29 2024 00:21:03.616 read: IOPS=25, BW=99.3KiB/s (102kB/s)(256KiB/2579msec) 00:21:03.616 slat (nsec): min=24221, max=59620, avg=25382.54, stdev=4349.15 00:21:03.616 clat (usec): min=1131, max=43027, avg=40238.77, stdev=8747.38 00:21:03.616 lat (usec): min=1158, max=43052, avg=40264.16, stdev=8744.51 00:21:03.616 clat percentiles (usec): 00:21:03.616 | 1.00th=[ 1139], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:21:03.616 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:03.616 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:21:03.616 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:03.616 | 99.99th=[43254] 00:21:03.616 bw ( KiB/s): min= 96, max= 112, per=1.41%, avg=99.20, stdev= 7.16, samples=5 00:21:03.616 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:21:03.616 lat (msec) : 2=4.62%, 50=93.85% 00:21:03.616 cpu : usr=0.12%, sys=0.00%, ctx=66, majf=0, minf=2 00:21:03.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.616 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:03.616 00:21:03.616 Run status group 0 (all jobs): 00:21:03.616 READ: bw=7019KiB/s (7187kB/s), 99.3KiB/s-3626KiB/s (102kB/s-3713kB/s), io=21.0MiB (22.0MB), run=2579-3057msec 00:21:03.616 00:21:03.616 Disk stats (read/write): 00:21:03.616 nvme0n1: ios=2576/0, merge=0/0, ticks=2631/0, in_queue=2631, util=93.99% 00:21:03.616 nvme0n2: ios=2448/0, merge=0/0, ticks=2459/0, in_queue=2459, util=94.66% 00:21:03.616 nvme0n3: ios=86/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.03% 00:21:03.616 nvme0n4: ios=58/0, merge=0/0, ticks=2324/0, in_queue=2324, util=96.02% 00:21:03.616 
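The fio summaries above come from the hotplug pass of this test: while the 10-second read job (iodepth 1, 4 KiB blocks) is still running, target/fio.sh deletes the RAID and malloc bdevs backing the namespaces, so the Remote I/O and Input/output errors reported per job are the expected outcome rather than a failure. A minimal sketch of that flow, assuming the default rpc.py socket and the bdev names used in this run (it is not the literal contents of target/fio.sh):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start a long-running read workload against the connected nvme0n1..n4 devices
    $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # pull the backing bdevs out from under the subsystem while fio is reading;
    # the target drops the namespaces and fio should fail with I/O errors, not hang
    $SPDK/scripts/rpc.py bdev_raid_delete concat0
    $SPDK/scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done

    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'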
01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.616 01:39:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:03.877 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.877 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:04.136 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:04.136 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:04.136 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:04.136 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3991070 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:04.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:04.395 nvmf hotplug test: fio failed as expected 00:21:04.395 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 
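Once the hotplug run has returned, the script disconnects the initiator, removes the subsystem and cleans up its fio state files; nvmftestfini then unloads the initiator modules and stops the target application (the rmmod and killprocess output that follows). Roughly, with the PID handling simplified to what the trace shows and the helper variable below used only as a placeholder:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

    # nvmftestfini: drop the kernel initiator modules and kill the nvmf target app
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmf_app_pid"    # placeholder for the killprocess helper; this run kills PID 3987573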
00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.654 rmmod nvme_tcp 00:21:04.654 rmmod nvme_fabrics 00:21:04.654 rmmod nvme_keyring 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3987573 ']' 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3987573 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3987573 ']' 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3987573 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.654 01:39:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3987573 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3987573' 00:21:04.914 killing process with pid 3987573 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3987573 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3987573 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.914 01:39:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.467 01:39:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.468 00:21:07.468 real 0m29.043s 00:21:07.468 user 2m26.949s 00:21:07.468 sys 0m9.696s 00:21:07.468 01:39:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:07.468 01:39:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.468 ************************************ 00:21:07.468 END TEST nvmf_fio_target 00:21:07.468 ************************************ 00:21:07.468 01:39:33 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:07.468 01:39:33 
nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:07.468 01:39:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:07.468 01:39:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.468 ************************************ 00:21:07.468 START TEST nvmf_bdevio 00:21:07.468 ************************************ 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:07.468 * Looking for test storage... 00:21:07.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
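Condensed, the bdevio.sh preamble traced above amounts to the following (a sketch; the common.sh path is abbreviated):

    source test/nvmf/common.sh        # ports 4420-4422, serial SPDKISFASTANDAWESOME, NVME_HOSTNQN from 'nvme gen-hostnqn'
    MALLOC_BDEV_SIZE=64               # MiB of RAM backing the test namespace
    MALLOC_BLOCK_SIZE=512             # bytes per block
    nvmftestinit                      # NIC discovery plus the netns/IP setup that follows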
00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.468 01:39:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:21:15.611 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:15.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:15.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:15.612 Found net devices under 0000:31:00.0: cvl_0_0 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:15.612 Found net devices under 0000:31:00.1: cvl_0_1 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:15.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:15.612 00:21:15.612 --- 10.0.0.2 ping statistics --- 00:21:15.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.612 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:21:15.612 00:21:15.612 --- 10.0.0.1 ping statistics --- 00:21:15.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.612 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3996958 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3996958 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3996958 ']' 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.612 01:39:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:15.612 [2024-07-12 01:39:41.783490] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
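For reference, the back-to-back TCP topology those pings verify is built by nvmf_tcp_init a few lines earlier; condensed it is roughly this (a sketch of the commands traced above, with cvl_0_0/cvl_0_1 being the two ice-driver ports found on this rig):

    ip netns add cvl_0_0_ns_spdk                          # the target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns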
00:21:15.612 [2024-07-12 01:39:41.783554] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.612 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.612 [2024-07-12 01:39:41.878475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.612 [2024-07-12 01:39:41.927766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.612 [2024-07-12 01:39:41.927827] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.612 [2024-07-12 01:39:41.927836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.612 [2024-07-12 01:39:41.927843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.613 [2024-07-12 01:39:41.927849] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.613 [2024-07-12 01:39:41.928010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.613 [2024-07-12 01:39:41.928172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:15.613 [2024-07-12 01:39:41.928300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:15.613 [2024-07-12 01:39:41.928328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.285 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.285 [2024-07-12 01:39:42.635594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 Malloc0 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.545 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 [2024-07-12 01:39:42.700558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.546 { 00:21:16.546 "params": { 00:21:16.546 "name": "Nvme$subsystem", 00:21:16.546 "trtype": "$TEST_TRANSPORT", 00:21:16.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.546 "adrfam": "ipv4", 00:21:16.546 "trsvcid": "$NVMF_PORT", 00:21:16.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.546 "hdgst": ${hdgst:-false}, 00:21:16.546 "ddgst": ${ddgst:-false} 00:21:16.546 }, 00:21:16.546 "method": "bdev_nvme_attach_controller" 00:21:16.546 } 00:21:16.546 EOF 00:21:16.546 )") 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:21:16.546 01:39:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.546 "params": { 00:21:16.546 "name": "Nvme1", 00:21:16.546 "trtype": "tcp", 00:21:16.546 "traddr": "10.0.0.2", 00:21:16.546 "adrfam": "ipv4", 00:21:16.546 "trsvcid": "4420", 00:21:16.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.546 "hdgst": false, 00:21:16.546 "ddgst": false 00:21:16.546 }, 00:21:16.546 "method": "bdev_nvme_attach_controller" 00:21:16.546 }' 00:21:16.546 [2024-07-12 01:39:42.757793] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
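The target that bdevio exercises is provisioned with five RPCs, and the initiator half gets its configuration as generated JSON on /dev/fd/62; roughly (a sketch, with rpc_cmd standing in for scripts/rpc.py run against the namespaced target):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # TCP transport for the target
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0               # 64 MiB, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches as the initiator using the bdev_nvme_attach_controller JSON printed above
    test/bdev/bdevio/bdevio --json /dev/fd/62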
00:21:16.546 [2024-07-12 01:39:42.757861] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997041 ] 00:21:16.546 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.546 [2024-07-12 01:39:42.829738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:16.546 [2024-07-12 01:39:42.870214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.546 [2024-07-12 01:39:42.870363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.546 [2024-07-12 01:39:42.870458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.804 I/O targets: 00:21:16.804 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:16.804 00:21:16.804 00:21:16.804 CUnit - A unit testing framework for C - Version 2.1-3 00:21:16.804 http://cunit.sourceforge.net/ 00:21:16.804 00:21:16.804 00:21:16.804 Suite: bdevio tests on: Nvme1n1 00:21:17.064 Test: blockdev write read block ...passed 00:21:17.064 Test: blockdev write zeroes read block ...passed 00:21:17.064 Test: blockdev write zeroes read no split ...passed 00:21:17.064 Test: blockdev write zeroes read split ...passed 00:21:17.064 Test: blockdev write zeroes read split partial ...passed 00:21:17.064 Test: blockdev reset ...[2024-07-12 01:39:43.302632] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:17.064 [2024-07-12 01:39:43.302702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f323a0 (9): Bad file descriptor 00:21:17.064 [2024-07-12 01:39:43.370465] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:17.064 passed 00:21:17.064 Test: blockdev write read 8 blocks ...passed 00:21:17.324 Test: blockdev write read size > 128k ...passed 00:21:17.324 Test: blockdev write read invalid size ...passed 00:21:17.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:17.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:17.324 Test: blockdev write read max offset ...passed 00:21:17.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:17.324 Test: blockdev writev readv 8 blocks ...passed 00:21:17.324 Test: blockdev writev readv 30 x 1block ...passed 00:21:17.324 Test: blockdev writev readv block ...passed 00:21:17.324 Test: blockdev writev readv size > 128k ...passed 00:21:17.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:17.324 Test: blockdev comparev and writev ...[2024-07-12 01:39:43.671948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.671973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.671984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.671989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.672345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.672354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.672364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.672369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.672718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.672727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.672736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.672742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.673088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.673096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:17.324 [2024-07-12 01:39:43.673106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.324 [2024-07-12 01:39:43.673111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:17.585 passed 00:21:17.585 Test: blockdev nvme passthru rw ...passed 00:21:17.585 Test: blockdev nvme passthru vendor specific ...[2024-07-12 01:39:43.757756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.585 [2024-07-12 01:39:43.757766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:17.585 [2024-07-12 01:39:43.757976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.585 [2024-07-12 01:39:43.757984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:17.585 [2024-07-12 01:39:43.758205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.585 [2024-07-12 01:39:43.758213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:17.585 [2024-07-12 01:39:43.758428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.585 [2024-07-12 01:39:43.758439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:17.585 passed 00:21:17.585 Test: blockdev nvme admin passthru ...passed 00:21:17.585 Test: blockdev copy ...passed 00:21:17.585 00:21:17.585 Run Summary: Type Total Ran Passed Failed Inactive 00:21:17.585 suites 1 1 n/a 0 0 00:21:17.585 tests 23 23 23 0 0 00:21:17.585 asserts 152 152 152 0 n/a 00:21:17.585 00:21:17.585 Elapsed time = 1.373 seconds 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.585 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.585 rmmod nvme_tcp 00:21:17.846 rmmod nvme_fabrics 00:21:17.846 rmmod nvme_keyring 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3996958 ']' 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3996958 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3996958 ']' 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3996958 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.846 01:39:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3996958 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3996958' 00:21:17.846 killing process with pid 3996958 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3996958 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3996958 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.846 01:39:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.391 01:39:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.391 00:21:20.391 real 0m12.953s 00:21:20.391 user 0m13.754s 00:21:20.391 sys 0m6.702s 00:21:20.391 01:39:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:20.391 01:39:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:20.391 ************************************ 00:21:20.391 END TEST nvmf_bdevio 00:21:20.391 ************************************ 00:21:20.391 01:39:46 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.391 01:39:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:20.391 01:39:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:20.391 01:39:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.391 ************************************ 00:21:20.391 START TEST nvmf_auth_target 00:21:20.391 ************************************ 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.391 * Looking for test storage... 
00:21:20.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.391 01:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.528 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.529 01:39:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:28.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:28.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:21:28.529 Found net devices under 0000:31:00.0: cvl_0_0 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:28.529 Found net devices under 0000:31:00.1: cvl_0_1 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:21:28.529 00:21:28.529 --- 10.0.0.2 ping statistics --- 00:21:28.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.529 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:21:28.529 00:21:28.529 --- 10.0.0.1 ping statistics --- 00:21:28.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.529 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4002002 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4002002 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4002002 ']' 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
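The auth target is the same nvmf_tgt binary, launched inside the target namespace with the nvmf_auth log component enabled so the in-band authentication steps are traced; condensed (a sketch of the command above, workspace path shortened, backgrounding assumed as nvmfappstart normally does):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"      # waits for the RPC socket /var/tmp/spdk.sock to come up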
00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.529 01:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4002348 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77dbe3bba95b1108410b364b4b35212ecc35a57c6f0261b6 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gU1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77dbe3bba95b1108410b364b4b35212ecc35a57c6f0261b6 0 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77dbe3bba95b1108410b364b4b35212ecc35a57c6f0261b6 0 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77dbe3bba95b1108410b364b4b35212ecc35a57c6f0261b6 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gU1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gU1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gU1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bfd69a7f43be79c7d1ce3ea3f9ffc978976f4a500d06146df0877387f3242002 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qtq 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bfd69a7f43be79c7d1ce3ea3f9ffc978976f4a500d06146df0877387f3242002 3 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bfd69a7f43be79c7d1ce3ea3f9ffc978976f4a500d06146df0877387f3242002 3 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bfd69a7f43be79c7d1ce3ea3f9ffc978976f4a500d06146df0877387f3242002 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qtq 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qtq 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Qtq 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ce198e6bf1c46c6fe024898f91e02b1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yfH 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ce198e6bf1c46c6fe024898f91e02b1 1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ce198e6bf1c46c6fe024898f91e02b1 1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=2ce198e6bf1c46c6fe024898f91e02b1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yfH 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yfH 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.yfH 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0e6a43b7f4b63d3e69193387e744cb40c070ab02f57a160 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.F7t 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0e6a43b7f4b63d3e69193387e744cb40c070ab02f57a160 2 00:21:29.473 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0e6a43b7f4b63d3e69193387e744cb40c070ab02f57a160 2 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0e6a43b7f4b63d3e69193387e744cb40c070ab02f57a160 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.F7t 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.F7t 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.F7t 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:29.474 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f35813c495ccb997daa428795b1d079b6e98dedf1f2b36f2 00:21:29.735 
01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xwU 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f35813c495ccb997daa428795b1d079b6e98dedf1f2b36f2 2 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f35813c495ccb997daa428795b1d079b6e98dedf1f2b36f2 2 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f35813c495ccb997daa428795b1d079b6e98dedf1f2b36f2 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xwU 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xwU 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.xwU 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c4de00da0101cf579dd632990f760f5 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NJJ 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c4de00da0101cf579dd632990f760f5 1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5c4de00da0101cf579dd632990f760f5 1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c4de00da0101cf579dd632990f760f5 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NJJ 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NJJ 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.NJJ 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d2d029cf0ee1eb6b74844ecbb97033d65d24539265012a1d56c90d78d0288797 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PDa 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d2d029cf0ee1eb6b74844ecbb97033d65d24539265012a1d56c90d78d0288797 3 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d2d029cf0ee1eb6b74844ecbb97033d65d24539265012a1d56c90d78d0288797 3 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d2d029cf0ee1eb6b74844ecbb97033d65d24539265012a1d56c90d78d0288797 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:29.735 01:39:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PDa 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PDa 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PDa 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4002002 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4002002 ']' 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
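The repeated gen_dhchap_key calls above build the key material for the auth test: a random hex secret of the requested length is read from /dev/urandom with xxd, written to a mode-0600 file under /tmp, and wrapped into the DHHC-1 secret representation by a python one-liner whose body is not captured in this trace. A sketch of that formatting step, assuming (from the DHHC-1:NN:...==: strings passed to the later nvme connect calls) that the secret is the ASCII hex string base64-encoded together with its CRC-32 appended in little-endian order, and that NN is the digest index from the digests map traced above (0=null, 1=sha256, 2=sha384, 3=sha512):

  # 48-character hex secret, as in "gen_dhchap_key null 48" above
  key=$(xxd -p -c0 -l 24 /dev/urandom)
  # DHHC-1:<digest>:<base64(secret + CRC-32 of secret)>: -- the part done by the untraced "python -" step
  secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" 0)
  file=$(mktemp -t spdk.key-null.XXX)
  echo "$secret" > "$file"
  chmod 0600 "$file"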
00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.735 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4002348 /var/tmp/host.sock 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4002348 ']' 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gU1 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.995 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gU1 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gU1 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Qtq ]] 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qtq 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qtq 00:21:30.256 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qtq 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yfH 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yfH 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yfH 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.F7t ]] 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F7t 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F7t 00:21:30.517 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F7t 00:21:30.777 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:30.777 01:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xwU 00:21:30.777 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.777 01:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.777 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.777 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xwU 00:21:30.777 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xwU 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.NJJ ]] 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJJ 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJJ 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.NJJ 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PDa 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PDa 00:21:31.037 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PDa 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:31.297 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:31.298 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.298 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.298 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.298 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.298 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.558 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.558 01:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.818 { 00:21:31.818 "cntlid": 1, 00:21:31.818 "qid": 0, 00:21:31.818 "state": "enabled", 00:21:31.818 "listen_address": { 00:21:31.818 "trtype": "TCP", 00:21:31.818 "adrfam": "IPv4", 00:21:31.818 "traddr": "10.0.0.2", 00:21:31.818 "trsvcid": "4420" 00:21:31.818 }, 00:21:31.818 "peer_address": { 00:21:31.818 "trtype": "TCP", 00:21:31.818 "adrfam": "IPv4", 00:21:31.818 "traddr": "10.0.0.1", 00:21:31.818 "trsvcid": "44936" 00:21:31.818 }, 00:21:31.818 "auth": { 00:21:31.818 "state": "completed", 00:21:31.818 "digest": "sha256", 00:21:31.818 "dhgroup": "null" 00:21:31.818 } 00:21:31.818 } 00:21:31.818 ]' 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.818 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.078 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.078 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.078 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.078 01:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.018 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.278 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.278 { 00:21:33.278 "cntlid": 3, 00:21:33.278 "qid": 0, 00:21:33.278 "state": "enabled", 00:21:33.278 "listen_address": { 00:21:33.278 
"trtype": "TCP", 00:21:33.278 "adrfam": "IPv4", 00:21:33.278 "traddr": "10.0.0.2", 00:21:33.278 "trsvcid": "4420" 00:21:33.278 }, 00:21:33.278 "peer_address": { 00:21:33.278 "trtype": "TCP", 00:21:33.278 "adrfam": "IPv4", 00:21:33.278 "traddr": "10.0.0.1", 00:21:33.278 "trsvcid": "44958" 00:21:33.278 }, 00:21:33.278 "auth": { 00:21:33.278 "state": "completed", 00:21:33.278 "digest": "sha256", 00:21:33.278 "dhgroup": "null" 00:21:33.278 } 00:21:33.278 } 00:21:33.278 ]' 00:21:33.278 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.538 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.798 01:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:34.369 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.629 01:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.890 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.890 { 00:21:34.890 "cntlid": 5, 00:21:34.890 "qid": 0, 00:21:34.890 "state": "enabled", 00:21:34.890 "listen_address": { 00:21:34.890 "trtype": "TCP", 00:21:34.890 "adrfam": "IPv4", 00:21:34.890 "traddr": "10.0.0.2", 00:21:34.890 "trsvcid": "4420" 00:21:34.890 }, 00:21:34.890 "peer_address": { 00:21:34.890 "trtype": "TCP", 00:21:34.890 "adrfam": "IPv4", 00:21:34.890 "traddr": "10.0.0.1", 00:21:34.890 "trsvcid": "44986" 00:21:34.890 }, 00:21:34.890 "auth": { 00:21:34.890 "state": "completed", 00:21:34.890 "digest": "sha256", 00:21:34.890 "dhgroup": "null" 00:21:34.890 } 00:21:34.890 } 00:21:34.890 ]' 00:21:34.890 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.150 01:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.092 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.352 00:21:36.352 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.352 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.352 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.612 { 00:21:36.612 "cntlid": 7, 00:21:36.612 "qid": 0, 00:21:36.612 "state": "enabled", 00:21:36.612 "listen_address": { 00:21:36.612 "trtype": "TCP", 00:21:36.612 "adrfam": "IPv4", 00:21:36.612 "traddr": "10.0.0.2", 00:21:36.612 "trsvcid": "4420" 00:21:36.612 }, 00:21:36.612 "peer_address": { 00:21:36.612 "trtype": "TCP", 00:21:36.612 "adrfam": "IPv4", 00:21:36.612 "traddr": "10.0.0.1", 00:21:36.612 "trsvcid": "45008" 00:21:36.612 }, 00:21:36.612 "auth": { 00:21:36.612 "state": "completed", 00:21:36.612 "digest": "sha256", 00:21:36.612 "dhgroup": "null" 00:21:36.612 } 00:21:36.612 } 00:21:36.612 ]' 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.612 01:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.872 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:21:37.451 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.451 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.451 01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.451 
01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.712 01:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.972 00:21:37.972 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.972 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.972 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.231 01:40:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.231 { 00:21:38.231 "cntlid": 9, 00:21:38.231 "qid": 0, 00:21:38.231 "state": "enabled", 00:21:38.231 "listen_address": { 00:21:38.231 "trtype": "TCP", 00:21:38.231 "adrfam": "IPv4", 00:21:38.231 "traddr": "10.0.0.2", 00:21:38.231 "trsvcid": "4420" 00:21:38.231 }, 00:21:38.231 "peer_address": { 00:21:38.231 "trtype": "TCP", 00:21:38.231 "adrfam": "IPv4", 00:21:38.231 "traddr": "10.0.0.1", 00:21:38.231 "trsvcid": "34542" 00:21:38.231 }, 00:21:38.231 "auth": { 00:21:38.231 "state": "completed", 00:21:38.231 "digest": "sha256", 00:21:38.231 "dhgroup": "ffdhe2048" 00:21:38.231 } 00:21:38.231 } 00:21:38.231 ]' 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.231 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.491 01:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:39.061 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:39.320 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:39.320 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.321 01:40:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.321 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.580 00:21:39.580 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.580 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.580 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.840 { 00:21:39.840 "cntlid": 11, 00:21:39.840 "qid": 0, 00:21:39.840 "state": "enabled", 00:21:39.840 "listen_address": { 00:21:39.840 "trtype": "TCP", 00:21:39.840 "adrfam": "IPv4", 00:21:39.840 "traddr": "10.0.0.2", 00:21:39.840 "trsvcid": "4420" 00:21:39.840 }, 00:21:39.840 "peer_address": { 00:21:39.840 "trtype": "TCP", 00:21:39.840 "adrfam": "IPv4", 00:21:39.840 "traddr": "10.0.0.1", 00:21:39.840 "trsvcid": "34560" 00:21:39.840 }, 00:21:39.840 "auth": { 00:21:39.840 "state": "completed", 00:21:39.840 "digest": "sha256", 00:21:39.840 "dhgroup": "ffdhe2048" 00:21:39.840 } 00:21:39.840 } 00:21:39.840 ]' 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.840 01:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.840 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.840 01:40:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.840 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.840 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.840 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.840 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.100 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.671 01:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.932 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.194 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.194 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.455 { 00:21:41.455 "cntlid": 13, 00:21:41.455 "qid": 0, 00:21:41.455 "state": "enabled", 00:21:41.455 "listen_address": { 00:21:41.455 "trtype": "TCP", 00:21:41.455 "adrfam": "IPv4", 00:21:41.455 "traddr": "10.0.0.2", 00:21:41.455 "trsvcid": "4420" 00:21:41.455 }, 00:21:41.455 "peer_address": { 00:21:41.455 "trtype": "TCP", 00:21:41.455 "adrfam": "IPv4", 00:21:41.455 "traddr": "10.0.0.1", 00:21:41.455 "trsvcid": "34586" 00:21:41.455 }, 00:21:41.455 "auth": { 00:21:41.455 "state": "completed", 00:21:41.455 "digest": "sha256", 00:21:41.455 "dhgroup": "ffdhe2048" 00:21:41.455 } 00:21:41.455 } 00:21:41.455 ]' 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.455 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.716 01:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:21:42.284 01:40:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.284 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.544 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.805 00:21:42.805 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.805 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.805 01:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
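The cycle the trace keeps repeating for each key is the connect_authenticate helper: constrain the host to one digest/dhgroup pair, allow the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller with the same keys, read the negotiated auth parameters back from the target, and detach. A rough stand-alone sketch of one such cycle, using only the RPCs visible above and assuming rpc.py is SPDK's scripts/rpc.py, that the target answers on its default RPC socket, and that key1/ckey1 are key names the harness registered earlier (not shown here):

  # Host side: limit DH-HMAC-CHAP negotiation to one digest and one DH group.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host NQN on the subsystem with the key1/ckey1 pair.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller over TCP with the same keys.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Target side: read back the qpair's negotiated auth parameters.
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

  # Host side: tear the controller down before the next key/dhgroup round.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The [[ ... == ... ]] comparisons in the trace are checking exactly that read-back: .auth.digest, .auth.dhgroup and .auth.state must match the values configured for the round.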
00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.805 { 00:21:42.805 "cntlid": 15, 00:21:42.805 "qid": 0, 00:21:42.805 "state": "enabled", 00:21:42.805 "listen_address": { 00:21:42.805 "trtype": "TCP", 00:21:42.805 "adrfam": "IPv4", 00:21:42.805 "traddr": "10.0.0.2", 00:21:42.805 "trsvcid": "4420" 00:21:42.805 }, 00:21:42.805 "peer_address": { 00:21:42.805 "trtype": "TCP", 00:21:42.805 "adrfam": "IPv4", 00:21:42.805 "traddr": "10.0.0.1", 00:21:42.805 "trsvcid": "34616" 00:21:42.805 }, 00:21:42.805 "auth": { 00:21:42.805 "state": "completed", 00:21:42.805 "digest": "sha256", 00:21:42.805 "dhgroup": "ffdhe2048" 00:21:42.805 } 00:21:42.805 } 00:21:42.805 ]' 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.805 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.066 01:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.008 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.269 00:21:44.269 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.269 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.269 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.529 { 00:21:44.529 "cntlid": 17, 00:21:44.529 "qid": 0, 00:21:44.529 "state": "enabled", 00:21:44.529 "listen_address": { 00:21:44.529 "trtype": "TCP", 00:21:44.529 "adrfam": "IPv4", 00:21:44.529 "traddr": "10.0.0.2", 00:21:44.529 "trsvcid": "4420" 00:21:44.529 }, 00:21:44.529 "peer_address": { 00:21:44.529 "trtype": "TCP", 00:21:44.529 "adrfam": "IPv4", 00:21:44.529 "traddr": "10.0.0.1", 00:21:44.529 "trsvcid": "34640" 00:21:44.529 }, 00:21:44.529 "auth": { 00:21:44.529 "state": "completed", 00:21:44.529 "digest": "sha256", 00:21:44.529 "dhgroup": "ffdhe3072" 00:21:44.529 } 00:21:44.529 } 00:21:44.529 ]' 00:21:44.529 01:40:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.529 01:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.791 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.732 
01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.732 01:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.993 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.993 { 00:21:45.993 "cntlid": 19, 00:21:45.993 "qid": 0, 00:21:45.993 "state": "enabled", 00:21:45.993 "listen_address": { 00:21:45.993 "trtype": "TCP", 00:21:45.993 "adrfam": "IPv4", 00:21:45.993 "traddr": "10.0.0.2", 00:21:45.993 "trsvcid": "4420" 00:21:45.993 }, 00:21:45.993 "peer_address": { 00:21:45.993 "trtype": "TCP", 00:21:45.993 "adrfam": "IPv4", 00:21:45.993 "traddr": "10.0.0.1", 00:21:45.993 "trsvcid": "34670" 00:21:45.993 }, 00:21:45.993 "auth": { 00:21:45.993 "state": "completed", 00:21:45.993 "digest": "sha256", 00:21:45.993 "dhgroup": "ffdhe3072" 00:21:45.993 } 00:21:45.993 } 00:21:45.993 ]' 00:21:45.993 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.254 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.514 01:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:47.085 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.344 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.345 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.345 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.605 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.605 { 00:21:47.605 "cntlid": 21, 00:21:47.605 "qid": 0, 00:21:47.605 "state": "enabled", 00:21:47.605 "listen_address": { 00:21:47.605 "trtype": "TCP", 00:21:47.605 "adrfam": "IPv4", 00:21:47.605 "traddr": "10.0.0.2", 00:21:47.605 "trsvcid": "4420" 00:21:47.605 }, 00:21:47.605 "peer_address": { 00:21:47.605 "trtype": "TCP", 00:21:47.605 "adrfam": "IPv4", 00:21:47.605 "traddr": "10.0.0.1", 00:21:47.605 "trsvcid": "52952" 00:21:47.605 }, 00:21:47.605 "auth": { 00:21:47.605 "state": "completed", 00:21:47.605 "digest": "sha256", 00:21:47.605 "dhgroup": "ffdhe3072" 00:21:47.605 } 00:21:47.605 } 00:21:47.605 ]' 00:21:47.605 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.866 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.866 01:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.866 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.866 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.866 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.866 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.866 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.127 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.698 01:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.959 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.219 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.219 { 00:21:49.219 "cntlid": 23, 00:21:49.219 "qid": 0, 00:21:49.219 "state": "enabled", 00:21:49.219 "listen_address": { 00:21:49.219 "trtype": "TCP", 00:21:49.219 "adrfam": "IPv4", 00:21:49.219 "traddr": "10.0.0.2", 00:21:49.219 "trsvcid": "4420" 00:21:49.219 }, 00:21:49.219 "peer_address": { 00:21:49.219 "trtype": "TCP", 00:21:49.219 
"adrfam": "IPv4", 00:21:49.219 "traddr": "10.0.0.1", 00:21:49.219 "trsvcid": "52986" 00:21:49.219 }, 00:21:49.219 "auth": { 00:21:49.219 "state": "completed", 00:21:49.219 "digest": "sha256", 00:21:49.219 "dhgroup": "ffdhe3072" 00:21:49.219 } 00:21:49.219 } 00:21:49.219 ]' 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.219 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.480 01:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.422 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.423 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.683 00:21:50.683 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.683 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.683 01:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.944 { 00:21:50.944 "cntlid": 25, 00:21:50.944 "qid": 0, 00:21:50.944 "state": "enabled", 00:21:50.944 "listen_address": { 00:21:50.944 "trtype": "TCP", 00:21:50.944 "adrfam": "IPv4", 00:21:50.944 "traddr": "10.0.0.2", 00:21:50.944 "trsvcid": "4420" 00:21:50.944 }, 00:21:50.944 "peer_address": { 00:21:50.944 "trtype": "TCP", 00:21:50.944 "adrfam": "IPv4", 00:21:50.944 "traddr": "10.0.0.1", 00:21:50.944 "trsvcid": "53014" 00:21:50.944 }, 00:21:50.944 "auth": { 00:21:50.944 "state": "completed", 00:21:50.944 "digest": "sha256", 00:21:50.944 "dhgroup": "ffdhe4096" 00:21:50.944 } 00:21:50.944 } 00:21:50.944 ]' 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.944 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.944 
01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.256 01:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.850 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.110 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.371 00:21:52.371 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.371 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.371 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.632 { 00:21:52.632 "cntlid": 27, 00:21:52.632 "qid": 0, 00:21:52.632 "state": "enabled", 00:21:52.632 "listen_address": { 00:21:52.632 "trtype": "TCP", 00:21:52.632 "adrfam": "IPv4", 00:21:52.632 "traddr": "10.0.0.2", 00:21:52.632 "trsvcid": "4420" 00:21:52.632 }, 00:21:52.632 "peer_address": { 00:21:52.632 "trtype": "TCP", 00:21:52.632 "adrfam": "IPv4", 00:21:52.632 "traddr": "10.0.0.1", 00:21:52.632 "trsvcid": "53030" 00:21:52.632 }, 00:21:52.632 "auth": { 00:21:52.632 "state": "completed", 00:21:52.632 "digest": "sha256", 00:21:52.632 "dhgroup": "ffdhe4096" 00:21:52.632 } 00:21:52.632 } 00:21:52.632 ]' 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.632 01:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.893 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
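Between target reconfigurations the trace also drives the kernel initiator: nvme-cli connects with the literal DHHC-1 secrets (rather than key names), disconnects, and the host entry is then removed from the subsystem. A minimal sketch of that leg, with the secret strings reduced to placeholders for the DHHC-1:xx:... values generated earlier in the test:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # In-band DH-HMAC-CHAP from the kernel host; both secrets are placeholders.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret "DHHC-1:01:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"

  # Drop the connection and revoke the host before the next round.
  nvme disconnect -n "$SUBNQN"
  rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"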
00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.464 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.725 01:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.986 00:21:53.986 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.986 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.986 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.246 
01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.246 { 00:21:54.246 "cntlid": 29, 00:21:54.246 "qid": 0, 00:21:54.246 "state": "enabled", 00:21:54.246 "listen_address": { 00:21:54.246 "trtype": "TCP", 00:21:54.246 "adrfam": "IPv4", 00:21:54.246 "traddr": "10.0.0.2", 00:21:54.246 "trsvcid": "4420" 00:21:54.246 }, 00:21:54.246 "peer_address": { 00:21:54.246 "trtype": "TCP", 00:21:54.246 "adrfam": "IPv4", 00:21:54.246 "traddr": "10.0.0.1", 00:21:54.246 "trsvcid": "53066" 00:21:54.246 }, 00:21:54.246 "auth": { 00:21:54.246 "state": "completed", 00:21:54.246 "digest": "sha256", 00:21:54.246 "dhgroup": "ffdhe4096" 00:21:54.246 } 00:21:54.246 } 00:21:54.246 ]' 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.246 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.507 01:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.078 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.337 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.596 00:21:55.596 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.596 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.596 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.855 01:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.855 { 00:21:55.855 "cntlid": 31, 00:21:55.855 "qid": 0, 00:21:55.855 "state": "enabled", 00:21:55.855 "listen_address": { 00:21:55.855 "trtype": "TCP", 00:21:55.855 "adrfam": "IPv4", 00:21:55.855 "traddr": "10.0.0.2", 00:21:55.855 "trsvcid": "4420" 00:21:55.855 }, 00:21:55.855 "peer_address": { 00:21:55.855 "trtype": "TCP", 00:21:55.855 "adrfam": "IPv4", 00:21:55.855 "traddr": "10.0.0.1", 00:21:55.855 "trsvcid": "53092" 00:21:55.855 }, 00:21:55.855 "auth": { 00:21:55.855 "state": "completed", 00:21:55.855 "digest": "sha256", 00:21:55.855 "dhgroup": "ffdhe4096" 00:21:55.855 } 00:21:55.855 } 00:21:55.855 ]' 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.855 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.115 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:21:56.685 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.686 01:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:21:56.946 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.204 00:21:57.204 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.204 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.204 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.462 { 00:21:57.462 "cntlid": 33, 00:21:57.462 "qid": 0, 00:21:57.462 "state": "enabled", 00:21:57.462 "listen_address": { 00:21:57.462 "trtype": "TCP", 00:21:57.462 "adrfam": "IPv4", 00:21:57.462 "traddr": "10.0.0.2", 00:21:57.462 "trsvcid": "4420" 00:21:57.462 }, 00:21:57.462 "peer_address": { 00:21:57.462 "trtype": "TCP", 00:21:57.462 "adrfam": "IPv4", 00:21:57.462 "traddr": "10.0.0.1", 00:21:57.462 "trsvcid": "42782" 00:21:57.462 }, 00:21:57.462 "auth": { 00:21:57.462 "state": "completed", 00:21:57.462 "digest": "sha256", 00:21:57.462 "dhgroup": "ffdhe6144" 00:21:57.462 } 00:21:57.462 } 00:21:57.462 ]' 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.462 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.463 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.722 01:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:58.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.291 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.551 01:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.811 00:21:58.811 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.811 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.811 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.071 { 00:21:59.071 "cntlid": 35, 00:21:59.071 "qid": 0, 00:21:59.071 "state": "enabled", 00:21:59.071 "listen_address": { 00:21:59.071 "trtype": "TCP", 00:21:59.071 "adrfam": "IPv4", 00:21:59.071 "traddr": "10.0.0.2", 00:21:59.071 "trsvcid": "4420" 00:21:59.071 }, 00:21:59.071 "peer_address": { 00:21:59.071 "trtype": "TCP", 00:21:59.071 "adrfam": "IPv4", 00:21:59.071 "traddr": "10.0.0.1", 00:21:59.071 "trsvcid": "42822" 00:21:59.071 }, 00:21:59.071 "auth": { 00:21:59.071 "state": "completed", 00:21:59.071 "digest": "sha256", 00:21:59.071 "dhgroup": "ffdhe6144" 00:21:59.071 } 00:21:59.071 } 00:21:59.071 ]' 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.071 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.330 01:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
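Each connect_authenticate pass above repeats the same three-step pattern: the host restricts which DH-HMAC-CHAP digests and dhgroups it will negotiate, the target registers the host NQN with the key under test, and the host then attaches a controller using that key. A minimal standalone sketch of one such pass, using only RPCs and values that already appear in this trace (paths abbreviated to scripts/rpc.py, and the target-side rpc_cmd wrapper written out as a plain rpc.py call, which is an assumption about the suite's plumbing; key1/ckey1 stand for whichever pre-loaded key index the loop has selected):

  # host side: limit negotiation to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # target side: allow this host NQN to authenticate with the chosen key
  # (--dhchap-ctrlr-key is passed only when a controller key exists for that index)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach a controller, authenticating with the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1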
00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.267 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.527 00:22:00.528 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.528 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.528 01:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.789 { 00:22:00.789 "cntlid": 37, 00:22:00.789 "qid": 0, 00:22:00.789 "state": "enabled", 00:22:00.789 "listen_address": { 00:22:00.789 "trtype": "TCP", 00:22:00.789 "adrfam": "IPv4", 00:22:00.789 "traddr": "10.0.0.2", 00:22:00.789 "trsvcid": "4420" 00:22:00.789 }, 00:22:00.789 "peer_address": { 00:22:00.789 "trtype": "TCP", 00:22:00.789 "adrfam": "IPv4", 00:22:00.789 "traddr": "10.0.0.1", 00:22:00.789 "trsvcid": "42848" 00:22:00.789 }, 00:22:00.789 "auth": { 00:22:00.789 "state": "completed", 00:22:00.789 "digest": "sha256", 00:22:00.789 "dhgroup": "ffdhe6144" 00:22:00.789 } 00:22:00.789 } 00:22:00.789 ]' 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.789 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.049 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.990 01:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.990 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.249 00:22:02.249 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.249 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.249 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.509 { 00:22:02.509 "cntlid": 39, 00:22:02.509 "qid": 0, 00:22:02.509 "state": "enabled", 00:22:02.509 "listen_address": { 00:22:02.509 "trtype": "TCP", 00:22:02.509 "adrfam": "IPv4", 00:22:02.509 "traddr": "10.0.0.2", 00:22:02.509 "trsvcid": "4420" 00:22:02.509 }, 00:22:02.509 "peer_address": { 00:22:02.509 "trtype": "TCP", 00:22:02.509 "adrfam": "IPv4", 00:22:02.509 "traddr": "10.0.0.1", 00:22:02.509 "trsvcid": "42874" 00:22:02.509 }, 00:22:02.509 "auth": { 00:22:02.509 "state": "completed", 00:22:02.509 "digest": "sha256", 00:22:02.509 "dhgroup": "ffdhe6144" 00:22:02.509 } 00:22:02.509 } 00:22:02.509 ]' 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.509 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.768 01:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:03.338 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.338 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.338 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.338 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.597 01:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.167 00:22:04.167 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.167 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.167 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.427 { 00:22:04.427 "cntlid": 41, 00:22:04.427 "qid": 0, 00:22:04.427 "state": "enabled", 00:22:04.427 "listen_address": { 00:22:04.427 "trtype": "TCP", 00:22:04.427 "adrfam": "IPv4", 00:22:04.427 "traddr": "10.0.0.2", 00:22:04.427 "trsvcid": "4420" 00:22:04.427 }, 00:22:04.427 "peer_address": { 00:22:04.427 "trtype": "TCP", 00:22:04.427 "adrfam": "IPv4", 00:22:04.427 "traddr": "10.0.0.1", 00:22:04.427 "trsvcid": "42896" 00:22:04.427 }, 00:22:04.427 "auth": { 00:22:04.427 "state": "completed", 00:22:04.427 "digest": "sha256", 00:22:04.427 "dhgroup": "ffdhe8192" 00:22:04.427 } 00:22:04.427 } 00:22:04.427 ]' 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.427 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.687 01:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.257 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.518 01:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.091 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.091 { 00:22:06.091 "cntlid": 43, 00:22:06.091 "qid": 0, 00:22:06.091 "state": "enabled", 00:22:06.091 "listen_address": { 00:22:06.091 "trtype": "TCP", 00:22:06.091 "adrfam": "IPv4", 00:22:06.091 "traddr": "10.0.0.2", 00:22:06.091 "trsvcid": "4420" 00:22:06.091 }, 00:22:06.091 "peer_address": { 
00:22:06.091 "trtype": "TCP", 00:22:06.091 "adrfam": "IPv4", 00:22:06.091 "traddr": "10.0.0.1", 00:22:06.091 "trsvcid": "42926" 00:22:06.091 }, 00:22:06.091 "auth": { 00:22:06.091 "state": "completed", 00:22:06.091 "digest": "sha256", 00:22:06.091 "dhgroup": "ffdhe8192" 00:22:06.091 } 00:22:06.091 } 00:22:06.091 ]' 00:22:06.091 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.351 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.610 01:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.181 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.440 01:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.013 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.013 { 00:22:08.013 "cntlid": 45, 00:22:08.013 "qid": 0, 00:22:08.013 "state": "enabled", 00:22:08.013 "listen_address": { 00:22:08.013 "trtype": "TCP", 00:22:08.013 "adrfam": "IPv4", 00:22:08.013 "traddr": "10.0.0.2", 00:22:08.013 "trsvcid": "4420" 00:22:08.013 }, 00:22:08.013 "peer_address": { 00:22:08.013 "trtype": "TCP", 00:22:08.013 "adrfam": "IPv4", 00:22:08.013 "traddr": "10.0.0.1", 00:22:08.013 "trsvcid": "58652" 00:22:08.013 }, 00:22:08.013 "auth": { 00:22:08.013 "state": "completed", 00:22:08.013 "digest": "sha256", 00:22:08.013 "dhgroup": "ffdhe8192" 00:22:08.013 } 00:22:08.013 } 00:22:08.013 ]' 00:22:08.013 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.274 01:40:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.274 01:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.214 01:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
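After each attach, the trace runs the same verification before tearing the controller down: the host lists its controllers, and the target's qpair listing is filtered with jq to confirm the negotiated authentication parameters. Roughly, for the sha256/ffdhe8192/key3 pass just above (same abbreviations and assumptions as in the earlier sketch):

  # host side: the controller created above should be the only one
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected output: nvme0

  # target side: inspect the admin qpair's completed authentication
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
  # expected output (one per line): completed, sha256, ffdhe8192

  # host side: detach before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0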
00:22:09.784 00:22:09.784 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.784 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.784 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.045 { 00:22:10.045 "cntlid": 47, 00:22:10.045 "qid": 0, 00:22:10.045 "state": "enabled", 00:22:10.045 "listen_address": { 00:22:10.045 "trtype": "TCP", 00:22:10.045 "adrfam": "IPv4", 00:22:10.045 "traddr": "10.0.0.2", 00:22:10.045 "trsvcid": "4420" 00:22:10.045 }, 00:22:10.045 "peer_address": { 00:22:10.045 "trtype": "TCP", 00:22:10.045 "adrfam": "IPv4", 00:22:10.045 "traddr": "10.0.0.1", 00:22:10.045 "trsvcid": "58668" 00:22:10.045 }, 00:22:10.045 "auth": { 00:22:10.045 "state": "completed", 00:22:10.045 "digest": "sha256", 00:22:10.045 "dhgroup": "ffdhe8192" 00:22:10.045 } 00:22:10.045 } 00:22:10.045 ]' 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.045 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.306 01:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.876 
01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:10.876 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.135 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.395 00:22:11.395 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.395 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.395 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.655 01:40:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.655 { 00:22:11.655 "cntlid": 49, 00:22:11.655 "qid": 0, 00:22:11.655 "state": "enabled", 00:22:11.655 "listen_address": { 00:22:11.655 "trtype": "TCP", 00:22:11.655 "adrfam": "IPv4", 00:22:11.655 "traddr": "10.0.0.2", 00:22:11.655 "trsvcid": "4420" 00:22:11.655 }, 00:22:11.655 "peer_address": { 00:22:11.655 "trtype": "TCP", 00:22:11.655 "adrfam": "IPv4", 00:22:11.655 "traddr": "10.0.0.1", 00:22:11.655 "trsvcid": "58700" 00:22:11.655 }, 00:22:11.655 "auth": { 00:22:11.655 "state": "completed", 00:22:11.655 "digest": "sha384", 00:22:11.655 "dhgroup": "null" 00:22:11.655 } 00:22:11.655 } 00:22:11.655 ]' 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.655 01:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.915 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.485 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.746 01:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.007 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.007 01:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.268 { 00:22:13.268 "cntlid": 51, 00:22:13.268 "qid": 0, 00:22:13.268 "state": "enabled", 00:22:13.268 "listen_address": { 00:22:13.268 "trtype": "TCP", 00:22:13.268 "adrfam": "IPv4", 00:22:13.268 "traddr": "10.0.0.2", 00:22:13.268 "trsvcid": "4420" 00:22:13.268 }, 00:22:13.268 "peer_address": { 00:22:13.268 "trtype": "TCP", 00:22:13.268 "adrfam": "IPv4", 00:22:13.268 "traddr": "10.0.0.1", 00:22:13.268 "trsvcid": "58714" 00:22:13.268 }, 00:22:13.268 "auth": { 00:22:13.268 "state": "completed", 00:22:13.268 "digest": "sha384", 00:22:13.268 "dhgroup": "null" 00:22:13.268 } 00:22:13.268 } 00:22:13.268 ]' 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
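Alongside the SPDK host RPCs, every key is also exercised through the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets given on the command line, disconnects, and the host entry is removed from the subsystem so the next combination starts clean. In outline (secrets elided here; the concrete DHHC-1 strings are the ones printed in the trace, and scripts/rpc.py again abbreviates the full workspace path):

  # kernel host: authenticate with the host secret
  # (plus the controller secret when the key under test is bidirectional)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # target side: drop the host entry before the next iteration
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396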
00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.268 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.528 01:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.097 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:14.357 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.617 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.617 { 00:22:14.617 "cntlid": 53, 00:22:14.617 "qid": 0, 00:22:14.617 "state": "enabled", 00:22:14.617 "listen_address": { 00:22:14.617 "trtype": "TCP", 00:22:14.617 "adrfam": "IPv4", 00:22:14.617 "traddr": "10.0.0.2", 00:22:14.617 "trsvcid": "4420" 00:22:14.617 }, 00:22:14.617 "peer_address": { 00:22:14.617 "trtype": "TCP", 00:22:14.617 "adrfam": "IPv4", 00:22:14.617 "traddr": "10.0.0.1", 00:22:14.617 "trsvcid": "58736" 00:22:14.617 }, 00:22:14.617 "auth": { 00:22:14.617 "state": "completed", 00:22:14.617 "digest": "sha384", 00:22:14.617 "dhgroup": "null" 00:22:14.617 } 00:22:14.617 } 00:22:14.617 ]' 00:22:14.617 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.878 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.878 01:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.878 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.817 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:15.817 01:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.817 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.076 00:22:16.076 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.076 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.076 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.336 { 00:22:16.336 "cntlid": 55, 00:22:16.336 "qid": 0, 00:22:16.336 "state": "enabled", 00:22:16.336 "listen_address": { 00:22:16.336 "trtype": "TCP", 00:22:16.336 "adrfam": "IPv4", 00:22:16.336 "traddr": "10.0.0.2", 00:22:16.336 "trsvcid": "4420" 00:22:16.336 }, 00:22:16.336 "peer_address": { 00:22:16.336 "trtype": "TCP", 00:22:16.336 "adrfam": "IPv4", 00:22:16.336 "traddr": "10.0.0.1", 00:22:16.336 "trsvcid": "58762" 00:22:16.336 }, 00:22:16.336 "auth": { 00:22:16.336 "state": "completed", 00:22:16.336 "digest": "sha384", 00:22:16.336 "dhgroup": "null" 00:22:16.336 } 00:22:16.336 } 00:22:16.336 ]' 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.336 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.594 01:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.217 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.481 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:17.482 
01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.482 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.742 00:22:17.742 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.742 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.742 01:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.742 { 00:22:17.742 "cntlid": 57, 00:22:17.742 "qid": 0, 00:22:17.742 "state": "enabled", 00:22:17.742 "listen_address": { 00:22:17.742 "trtype": "TCP", 00:22:17.742 "adrfam": "IPv4", 00:22:17.742 "traddr": "10.0.0.2", 00:22:17.742 "trsvcid": "4420" 00:22:17.742 }, 00:22:17.742 "peer_address": { 00:22:17.742 "trtype": "TCP", 00:22:17.742 "adrfam": "IPv4", 00:22:17.742 "traddr": "10.0.0.1", 00:22:17.742 "trsvcid": "49502" 00:22:17.742 }, 00:22:17.742 "auth": { 00:22:17.742 "state": "completed", 00:22:17.742 "digest": "sha384", 00:22:17.742 "dhgroup": "ffdhe2048" 00:22:17.742 } 00:22:17.742 } 00:22:17.742 ]' 00:22:17.742 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.001 01:40:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.001 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.260 01:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:18.828 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.088 01:40:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.088 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.348 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.348 { 00:22:19.348 "cntlid": 59, 00:22:19.348 "qid": 0, 00:22:19.348 "state": "enabled", 00:22:19.348 "listen_address": { 00:22:19.348 "trtype": "TCP", 00:22:19.348 "adrfam": "IPv4", 00:22:19.348 "traddr": "10.0.0.2", 00:22:19.348 "trsvcid": "4420" 00:22:19.348 }, 00:22:19.348 "peer_address": { 00:22:19.348 "trtype": "TCP", 00:22:19.348 "adrfam": "IPv4", 00:22:19.348 "traddr": "10.0.0.1", 00:22:19.348 "trsvcid": "49530" 00:22:19.348 }, 00:22:19.348 "auth": { 00:22:19.348 "state": "completed", 00:22:19.348 "digest": "sha384", 00:22:19.348 "dhgroup": "ffdhe2048" 00:22:19.348 } 00:22:19.348 } 00:22:19.348 ]' 00:22:19.348 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.609 01:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.547 01:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.807 00:22:20.807 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.807 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.807 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.067 { 00:22:21.067 "cntlid": 61, 00:22:21.067 "qid": 0, 00:22:21.067 "state": "enabled", 00:22:21.067 "listen_address": { 00:22:21.067 "trtype": "TCP", 00:22:21.067 "adrfam": "IPv4", 00:22:21.067 "traddr": "10.0.0.2", 00:22:21.067 "trsvcid": "4420" 00:22:21.067 }, 00:22:21.067 "peer_address": { 00:22:21.067 "trtype": "TCP", 00:22:21.067 "adrfam": "IPv4", 00:22:21.067 "traddr": "10.0.0.1", 00:22:21.067 "trsvcid": "49548" 00:22:21.067 }, 00:22:21.067 "auth": { 00:22:21.067 "state": "completed", 00:22:21.067 "digest": "sha384", 00:22:21.067 "dhgroup": "ffdhe2048" 00:22:21.067 } 00:22:21.067 } 00:22:21.067 ]' 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.067 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.326 01:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:22:21.898 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.158 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.418 00:22:22.418 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.418 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.418 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.679 { 00:22:22.679 "cntlid": 63, 00:22:22.679 "qid": 0, 00:22:22.679 "state": "enabled", 00:22:22.679 "listen_address": { 00:22:22.679 "trtype": "TCP", 00:22:22.679 "adrfam": "IPv4", 00:22:22.679 "traddr": "10.0.0.2", 00:22:22.679 "trsvcid": "4420" 00:22:22.679 }, 00:22:22.679 "peer_address": { 00:22:22.679 "trtype": "TCP", 00:22:22.679 "adrfam": "IPv4", 00:22:22.679 "traddr": "10.0.0.1", 00:22:22.679 "trsvcid": "49578" 00:22:22.679 }, 00:22:22.679 "auth": { 00:22:22.679 "state": "completed", 00:22:22.679 "digest": 
"sha384", 00:22:22.679 "dhgroup": "ffdhe2048" 00:22:22.679 } 00:22:22.679 } 00:22:22.679 ]' 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.679 01:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.939 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.511 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.771 01:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.032 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.032 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.032 { 00:22:24.032 "cntlid": 65, 00:22:24.032 "qid": 0, 00:22:24.032 "state": "enabled", 00:22:24.032 "listen_address": { 00:22:24.032 "trtype": "TCP", 00:22:24.032 "adrfam": "IPv4", 00:22:24.032 "traddr": "10.0.0.2", 00:22:24.032 "trsvcid": "4420" 00:22:24.032 }, 00:22:24.032 "peer_address": { 00:22:24.032 "trtype": "TCP", 00:22:24.032 "adrfam": "IPv4", 00:22:24.032 "traddr": "10.0.0.1", 00:22:24.032 "trsvcid": "49592" 00:22:24.032 }, 00:22:24.032 "auth": { 00:22:24.032 "state": "completed", 00:22:24.032 "digest": "sha384", 00:22:24.032 "dhgroup": "ffdhe3072" 00:22:24.032 } 00:22:24.032 } 00:22:24.032 ]' 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.293 01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.553 
01:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.124 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.385 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.645 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.645 { 00:22:25.645 "cntlid": 67, 00:22:25.645 "qid": 0, 00:22:25.645 "state": "enabled", 00:22:25.645 "listen_address": { 00:22:25.645 "trtype": "TCP", 00:22:25.645 "adrfam": "IPv4", 00:22:25.645 "traddr": "10.0.0.2", 00:22:25.645 "trsvcid": "4420" 00:22:25.645 }, 00:22:25.645 "peer_address": { 00:22:25.645 "trtype": "TCP", 00:22:25.645 "adrfam": "IPv4", 00:22:25.645 "traddr": "10.0.0.1", 00:22:25.645 "trsvcid": "49616" 00:22:25.645 }, 00:22:25.645 "auth": { 00:22:25.645 "state": "completed", 00:22:25.645 "digest": "sha384", 00:22:25.645 "dhgroup": "ffdhe3072" 00:22:25.645 } 00:22:25.645 } 00:22:25.645 ]' 00:22:25.645 01:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.906 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.167 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:26.738 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.738 01:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.738 01:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.738 01:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.738 
01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.738 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.738 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:26.738 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.001 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.261 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.261 { 00:22:27.261 "cntlid": 69, 00:22:27.261 "qid": 0, 00:22:27.261 "state": "enabled", 00:22:27.261 "listen_address": { 
00:22:27.261 "trtype": "TCP", 00:22:27.261 "adrfam": "IPv4", 00:22:27.261 "traddr": "10.0.0.2", 00:22:27.261 "trsvcid": "4420" 00:22:27.261 }, 00:22:27.261 "peer_address": { 00:22:27.261 "trtype": "TCP", 00:22:27.261 "adrfam": "IPv4", 00:22:27.261 "traddr": "10.0.0.1", 00:22:27.261 "trsvcid": "56762" 00:22:27.261 }, 00:22:27.261 "auth": { 00:22:27.261 "state": "completed", 00:22:27.261 "digest": "sha384", 00:22:27.261 "dhgroup": "ffdhe3072" 00:22:27.261 } 00:22:27.261 } 00:22:27.261 ]' 00:22:27.261 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.521 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.521 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.522 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:27.522 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.522 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.522 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.522 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.781 01:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.350 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:28.610 
01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.610 01:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.870 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.870 { 00:22:28.870 "cntlid": 71, 00:22:28.870 "qid": 0, 00:22:28.870 "state": "enabled", 00:22:28.870 "listen_address": { 00:22:28.870 "trtype": "TCP", 00:22:28.870 "adrfam": "IPv4", 00:22:28.870 "traddr": "10.0.0.2", 00:22:28.870 "trsvcid": "4420" 00:22:28.870 }, 00:22:28.870 "peer_address": { 00:22:28.870 "trtype": "TCP", 00:22:28.870 "adrfam": "IPv4", 00:22:28.870 "traddr": "10.0.0.1", 00:22:28.870 "trsvcid": "56780" 00:22:28.870 }, 00:22:28.870 "auth": { 00:22:28.870 "state": "completed", 00:22:28.870 "digest": "sha384", 00:22:28.870 "dhgroup": "ffdhe3072" 00:22:28.870 } 00:22:28.870 } 00:22:28.870 ]' 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.870 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.129 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.129 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.129 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.129 01:40:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.129 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.130 01:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:30.066 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.066 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.066 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.067 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.326 00:22:30.326 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.326 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.326 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.587 { 00:22:30.587 "cntlid": 73, 00:22:30.587 "qid": 0, 00:22:30.587 "state": "enabled", 00:22:30.587 "listen_address": { 00:22:30.587 "trtype": "TCP", 00:22:30.587 "adrfam": "IPv4", 00:22:30.587 "traddr": "10.0.0.2", 00:22:30.587 "trsvcid": "4420" 00:22:30.587 }, 00:22:30.587 "peer_address": { 00:22:30.587 "trtype": "TCP", 00:22:30.587 "adrfam": "IPv4", 00:22:30.587 "traddr": "10.0.0.1", 00:22:30.587 "trsvcid": "56822" 00:22:30.587 }, 00:22:30.587 "auth": { 00:22:30.587 "state": "completed", 00:22:30.587 "digest": "sha384", 00:22:30.587 "dhgroup": "ffdhe4096" 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ]' 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.587 01:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.846 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.787 01:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.049 00:22:32.049 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.049 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.049 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.310 { 00:22:32.310 "cntlid": 75, 00:22:32.310 "qid": 0, 00:22:32.310 "state": "enabled", 00:22:32.310 "listen_address": { 00:22:32.310 "trtype": "TCP", 00:22:32.310 "adrfam": "IPv4", 00:22:32.310 "traddr": "10.0.0.2", 00:22:32.310 "trsvcid": "4420" 00:22:32.310 }, 00:22:32.310 "peer_address": { 00:22:32.310 "trtype": "TCP", 00:22:32.310 "adrfam": "IPv4", 00:22:32.310 "traddr": "10.0.0.1", 00:22:32.310 "trsvcid": "56858" 00:22:32.310 }, 00:22:32.310 "auth": { 00:22:32.310 "state": "completed", 00:22:32.310 "digest": "sha384", 00:22:32.310 "dhgroup": "ffdhe4096" 00:22:32.310 } 00:22:32.310 } 00:22:32.310 ]' 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.310 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.570 01:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.141 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.402 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.663 00:22:33.663 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.663 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.663 01:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.923 { 00:22:33.923 "cntlid": 77, 00:22:33.923 "qid": 0, 00:22:33.923 "state": "enabled", 00:22:33.923 "listen_address": { 00:22:33.923 "trtype": "TCP", 00:22:33.923 "adrfam": "IPv4", 00:22:33.923 "traddr": "10.0.0.2", 00:22:33.923 "trsvcid": "4420" 00:22:33.923 }, 00:22:33.923 "peer_address": { 00:22:33.923 "trtype": "TCP", 00:22:33.923 "adrfam": "IPv4", 00:22:33.923 "traddr": "10.0.0.1", 00:22:33.923 "trsvcid": "56880" 00:22:33.923 }, 00:22:33.923 "auth": { 00:22:33.923 "state": "completed", 00:22:33.923 "digest": "sha384", 00:22:33.923 "dhgroup": "ffdhe4096" 00:22:33.923 } 00:22:33.923 } 00:22:33.923 ]' 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.923 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.182 01:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.750 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.010 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.270 00:22:35.270 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.270 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.270 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.531 { 00:22:35.531 "cntlid": 79, 00:22:35.531 "qid": 0, 00:22:35.531 "state": "enabled", 00:22:35.531 "listen_address": { 00:22:35.531 "trtype": "TCP", 00:22:35.531 "adrfam": "IPv4", 00:22:35.531 "traddr": "10.0.0.2", 00:22:35.531 "trsvcid": "4420" 00:22:35.531 }, 00:22:35.531 "peer_address": { 00:22:35.531 "trtype": "TCP", 00:22:35.531 "adrfam": "IPv4", 00:22:35.531 "traddr": "10.0.0.1", 00:22:35.531 "trsvcid": "56906" 00:22:35.531 }, 00:22:35.531 "auth": { 00:22:35.531 "state": "completed", 00:22:35.531 "digest": "sha384", 00:22:35.531 "dhgroup": "ffdhe4096" 00:22:35.531 } 00:22:35.531 } 00:22:35.531 ]' 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.531 01:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.792 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:36.363 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.363 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.624 01:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.884 00:22:36.884 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.884 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.884 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.145 { 00:22:37.145 "cntlid": 81, 00:22:37.145 "qid": 0, 00:22:37.145 "state": "enabled", 00:22:37.145 "listen_address": { 00:22:37.145 "trtype": "TCP", 00:22:37.145 "adrfam": "IPv4", 00:22:37.145 "traddr": "10.0.0.2", 00:22:37.145 "trsvcid": "4420" 00:22:37.145 }, 00:22:37.145 "peer_address": { 00:22:37.145 "trtype": "TCP", 00:22:37.145 "adrfam": "IPv4", 00:22:37.145 "traddr": "10.0.0.1", 00:22:37.145 "trsvcid": "56938" 00:22:37.145 }, 00:22:37.145 "auth": { 00:22:37.145 "state": "completed", 00:22:37.145 "digest": "sha384", 00:22:37.145 "dhgroup": "ffdhe6144" 00:22:37.145 } 00:22:37.145 } 00:22:37.145 ]' 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.145 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.406 01:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.350 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.610 00:22:38.610 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.610 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.610 01:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.872 { 00:22:38.872 "cntlid": 83, 00:22:38.872 "qid": 0, 00:22:38.872 "state": "enabled", 00:22:38.872 "listen_address": { 00:22:38.872 "trtype": "TCP", 00:22:38.872 "adrfam": "IPv4", 00:22:38.872 "traddr": "10.0.0.2", 00:22:38.872 "trsvcid": "4420" 00:22:38.872 }, 00:22:38.872 "peer_address": { 00:22:38.872 "trtype": "TCP", 00:22:38.872 "adrfam": "IPv4", 00:22:38.872 "traddr": "10.0.0.1", 00:22:38.872 "trsvcid": "51406" 00:22:38.872 }, 00:22:38.872 "auth": { 00:22:38.872 "state": "completed", 00:22:38.872 "digest": "sha384", 00:22:38.872 
"dhgroup": "ffdhe6144" 00:22:38.872 } 00:22:38.872 } 00:22:38.872 ]' 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.872 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.133 01:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.076 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.337 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.599 { 00:22:40.599 "cntlid": 85, 00:22:40.599 "qid": 0, 00:22:40.599 "state": "enabled", 00:22:40.599 "listen_address": { 00:22:40.599 "trtype": "TCP", 00:22:40.599 "adrfam": "IPv4", 00:22:40.599 "traddr": "10.0.0.2", 00:22:40.599 "trsvcid": "4420" 00:22:40.599 }, 00:22:40.599 "peer_address": { 00:22:40.599 "trtype": "TCP", 00:22:40.599 "adrfam": "IPv4", 00:22:40.599 "traddr": "10.0.0.1", 00:22:40.599 "trsvcid": "51440" 00:22:40.599 }, 00:22:40.599 "auth": { 00:22:40.599 "state": "completed", 00:22:40.599 "digest": "sha384", 00:22:40.599 "dhgroup": "ffdhe6144" 00:22:40.599 } 00:22:40.599 } 00:22:40.599 ]' 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.599 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.860 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:40.860 01:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.860 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.860 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.860 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.860 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.804 01:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.804 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.805 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.064 00:22:42.064 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.064 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.064 01:41:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.325 { 00:22:42.325 "cntlid": 87, 00:22:42.325 "qid": 0, 00:22:42.325 "state": "enabled", 00:22:42.325 "listen_address": { 00:22:42.325 "trtype": "TCP", 00:22:42.325 "adrfam": "IPv4", 00:22:42.325 "traddr": "10.0.0.2", 00:22:42.325 "trsvcid": "4420" 00:22:42.325 }, 00:22:42.325 "peer_address": { 00:22:42.325 "trtype": "TCP", 00:22:42.325 "adrfam": "IPv4", 00:22:42.325 "traddr": "10.0.0.1", 00:22:42.325 "trsvcid": "51460" 00:22:42.325 }, 00:22:42.325 "auth": { 00:22:42.325 "state": "completed", 00:22:42.325 "digest": "sha384", 00:22:42.325 "dhgroup": "ffdhe6144" 00:22:42.325 } 00:22:42.325 } 00:22:42.325 ]' 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.325 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.587 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.587 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.587 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.587 01:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.531 01:41:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.531 01:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.102 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.102 01:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:44.391 { 00:22:44.391 "cntlid": 89, 00:22:44.391 "qid": 0, 00:22:44.391 "state": "enabled", 00:22:44.391 "listen_address": { 00:22:44.391 "trtype": "TCP", 00:22:44.391 "adrfam": "IPv4", 00:22:44.391 "traddr": "10.0.0.2", 00:22:44.391 
"trsvcid": "4420" 00:22:44.391 }, 00:22:44.391 "peer_address": { 00:22:44.391 "trtype": "TCP", 00:22:44.391 "adrfam": "IPv4", 00:22:44.391 "traddr": "10.0.0.1", 00:22:44.391 "trsvcid": "51490" 00:22:44.391 }, 00:22:44.391 "auth": { 00:22:44.391 "state": "completed", 00:22:44.391 "digest": "sha384", 00:22:44.391 "dhgroup": "ffdhe8192" 00:22:44.391 } 00:22:44.391 } 00:22:44.391 ]' 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.391 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.685 01:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.261 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.523 01:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.094 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.094 { 00:22:46.094 "cntlid": 91, 00:22:46.094 "qid": 0, 00:22:46.094 "state": "enabled", 00:22:46.094 "listen_address": { 00:22:46.094 "trtype": "TCP", 00:22:46.094 "adrfam": "IPv4", 00:22:46.094 "traddr": "10.0.0.2", 00:22:46.094 "trsvcid": "4420" 00:22:46.094 }, 00:22:46.094 "peer_address": { 00:22:46.094 "trtype": "TCP", 00:22:46.094 "adrfam": "IPv4", 00:22:46.094 "traddr": "10.0.0.1", 00:22:46.094 "trsvcid": "51526" 00:22:46.094 }, 00:22:46.094 "auth": { 00:22:46.094 "state": "completed", 00:22:46.094 "digest": "sha384", 00:22:46.094 "dhgroup": "ffdhe8192" 00:22:46.094 } 00:22:46.094 } 00:22:46.094 ]' 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.094 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.355 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.355 01:41:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.355 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.355 01:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.295 01:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.866 00:22:47.867 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.867 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.867 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.128 { 00:22:48.128 "cntlid": 93, 00:22:48.128 "qid": 0, 00:22:48.128 "state": "enabled", 00:22:48.128 "listen_address": { 00:22:48.128 "trtype": "TCP", 00:22:48.128 "adrfam": "IPv4", 00:22:48.128 "traddr": "10.0.0.2", 00:22:48.128 "trsvcid": "4420" 00:22:48.128 }, 00:22:48.128 "peer_address": { 00:22:48.128 "trtype": "TCP", 00:22:48.128 "adrfam": "IPv4", 00:22:48.128 "traddr": "10.0.0.1", 00:22:48.128 "trsvcid": "51658" 00:22:48.128 }, 00:22:48.128 "auth": { 00:22:48.128 "state": "completed", 00:22:48.128 "digest": "sha384", 00:22:48.128 "dhgroup": "ffdhe8192" 00:22:48.128 } 00:22:48.128 } 00:22:48.128 ]' 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.128 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.389 01:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.961 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.223 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.794 00:22:49.794 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.794 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.794 01:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.794 01:41:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.794 { 00:22:49.794 "cntlid": 95, 00:22:49.794 "qid": 0, 00:22:49.794 "state": "enabled", 00:22:49.794 "listen_address": { 00:22:49.794 "trtype": "TCP", 00:22:49.794 "adrfam": "IPv4", 00:22:49.794 "traddr": "10.0.0.2", 00:22:49.794 "trsvcid": "4420" 00:22:49.794 }, 00:22:49.794 "peer_address": { 00:22:49.794 "trtype": "TCP", 00:22:49.794 "adrfam": "IPv4", 00:22:49.794 "traddr": "10.0.0.1", 00:22:49.794 "trsvcid": "51680" 00:22:49.794 }, 00:22:49.794 "auth": { 00:22:49.794 "state": "completed", 00:22:49.794 "digest": "sha384", 00:22:49.794 "dhgroup": "ffdhe8192" 00:22:49.794 } 00:22:49.794 } 00:22:49.794 ]' 00:22:49.794 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.056 01:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.000 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.263 00:22:51.263 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.263 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.263 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.524 { 00:22:51.524 "cntlid": 97, 00:22:51.524 "qid": 0, 00:22:51.524 "state": "enabled", 00:22:51.524 "listen_address": { 00:22:51.524 "trtype": "TCP", 00:22:51.524 "adrfam": "IPv4", 00:22:51.524 "traddr": "10.0.0.2", 00:22:51.524 "trsvcid": "4420" 00:22:51.524 }, 00:22:51.524 "peer_address": { 00:22:51.524 "trtype": "TCP", 00:22:51.524 "adrfam": "IPv4", 00:22:51.524 "traddr": "10.0.0.1", 00:22:51.524 "trsvcid": "51708" 00:22:51.524 }, 00:22:51.524 "auth": { 00:22:51.524 "state": "completed", 00:22:51.524 "digest": "sha512", 00:22:51.524 "dhgroup": "null" 00:22:51.524 } 00:22:51.524 } 00:22:51.524 ]' 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.524 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.784 01:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.355 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.616 01:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.877 00:22:52.877 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.877 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.877 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.138 { 00:22:53.138 "cntlid": 99, 00:22:53.138 "qid": 0, 00:22:53.138 "state": "enabled", 00:22:53.138 "listen_address": { 00:22:53.138 "trtype": "TCP", 00:22:53.138 "adrfam": "IPv4", 00:22:53.138 "traddr": "10.0.0.2", 00:22:53.138 "trsvcid": "4420" 00:22:53.138 }, 00:22:53.138 "peer_address": { 00:22:53.138 "trtype": "TCP", 00:22:53.138 "adrfam": "IPv4", 00:22:53.138 "traddr": "10.0.0.1", 00:22:53.138 "trsvcid": "51738" 00:22:53.138 }, 00:22:53.138 "auth": { 00:22:53.138 "state": "completed", 00:22:53.138 "digest": "sha512", 00:22:53.138 "dhgroup": "null" 00:22:53.138 } 00:22:53.138 } 00:22:53.138 ]' 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.138 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.399 01:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 
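The round that finishes above is the same sequence every digest/dhgroup/key combination goes through in this log. A condensed sketch of that round follows; it is paraphrased from the commands visible in the trace rather than copied from target/auth.sh itself. Here rpc_cmd stands for the autotest wrapper around the target-side RPC socket, host-side calls go through rpc.py -s /var/tmp/host.sock, and the NQNs, address, and DHHC-1 secrets are the ones printed in this run.

    # One DH-HMAC-CHAP verification round (sketch; shown for sha512/null with key1).
    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Restrict the SPDK host stack to the digest/dhgroup under test.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # Allow the host on the target subsystem with the key pair under test, then attach
    # through the SPDK initiator and confirm the qpair finished authentication.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # Repeat the check through the kernel initiator with the raw DHHC-1 secret strings
    # ($key/$ckey are placeholders for the DHHC-1:... values logged above), then remove
    # the host entry before the next combination.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"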
00:22:53.971 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:53.972 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.232 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.233 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.233 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.233 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.494 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.494 { 00:22:54.494 "cntlid": 101, 00:22:54.494 "qid": 0, 00:22:54.494 "state": "enabled", 00:22:54.494 "listen_address": { 00:22:54.494 "trtype": "TCP", 00:22:54.494 "adrfam": "IPv4", 00:22:54.494 "traddr": "10.0.0.2", 00:22:54.494 "trsvcid": "4420" 00:22:54.494 }, 00:22:54.494 "peer_address": { 00:22:54.494 "trtype": "TCP", 00:22:54.494 "adrfam": "IPv4", 00:22:54.494 "traddr": "10.0.0.1", 00:22:54.494 "trsvcid": "51778" 00:22:54.494 }, 00:22:54.494 "auth": { 00:22:54.494 "state": "completed", 00:22:54.494 "digest": "sha512", 00:22:54.494 "dhgroup": "null" 00:22:54.494 } 00:22:54.494 } 00:22:54.494 ]' 00:22:54.494 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.756 01:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.756 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.699 01:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.960 00:22:55.960 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.960 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.960 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.221 { 00:22:56.221 "cntlid": 103, 00:22:56.221 "qid": 0, 00:22:56.221 "state": "enabled", 00:22:56.221 "listen_address": { 00:22:56.221 "trtype": "TCP", 00:22:56.221 "adrfam": "IPv4", 00:22:56.221 "traddr": "10.0.0.2", 00:22:56.221 "trsvcid": "4420" 00:22:56.221 }, 00:22:56.221 "peer_address": { 00:22:56.221 "trtype": "TCP", 00:22:56.221 "adrfam": "IPv4", 00:22:56.221 "traddr": "10.0.0.1", 00:22:56.221 "trsvcid": "51812" 00:22:56.221 }, 00:22:56.221 "auth": { 00:22:56.221 "state": "completed", 00:22:56.221 "digest": "sha512", 00:22:56.221 "dhgroup": "null" 00:22:56.221 } 00:22:56.221 } 00:22:56.221 ]' 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.221 01:41:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.221 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.480 01:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:22:57.050 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.050 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:57.050 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.050 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.310 01:41:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.310 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.571 00:22:57.571 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.571 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.571 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.832 { 00:22:57.832 "cntlid": 105, 00:22:57.832 "qid": 0, 00:22:57.832 "state": "enabled", 00:22:57.832 "listen_address": { 00:22:57.832 "trtype": "TCP", 00:22:57.832 "adrfam": "IPv4", 00:22:57.832 "traddr": "10.0.0.2", 00:22:57.832 "trsvcid": "4420" 00:22:57.832 }, 00:22:57.832 "peer_address": { 00:22:57.832 "trtype": "TCP", 00:22:57.832 "adrfam": "IPv4", 00:22:57.832 "traddr": "10.0.0.1", 00:22:57.832 "trsvcid": "58404" 00:22:57.832 }, 00:22:57.832 "auth": { 00:22:57.832 "state": "completed", 00:22:57.832 "digest": "sha512", 00:22:57.832 "dhgroup": "ffdhe2048" 00:22:57.832 } 00:22:57.832 } 00:22:57.832 ]' 00:22:57.832 01:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.832 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.832 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:57.832 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:57.833 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.833 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.833 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.833 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.092 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:22:58.664 01:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.664 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.924 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.925 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.925 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.185 00:22:59.185 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.185 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:59.185 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.445 { 00:22:59.445 "cntlid": 107, 00:22:59.445 "qid": 0, 00:22:59.445 "state": "enabled", 00:22:59.445 "listen_address": { 00:22:59.445 "trtype": "TCP", 00:22:59.445 "adrfam": "IPv4", 00:22:59.445 "traddr": "10.0.0.2", 00:22:59.445 "trsvcid": "4420" 00:22:59.445 }, 00:22:59.445 "peer_address": { 00:22:59.445 "trtype": "TCP", 00:22:59.445 "adrfam": "IPv4", 00:22:59.445 "traddr": "10.0.0.1", 00:22:59.445 "trsvcid": "58446" 00:22:59.445 }, 00:22:59.445 "auth": { 00:22:59.445 "state": "completed", 00:22:59.445 "digest": "sha512", 00:22:59.445 "dhgroup": "ffdhe2048" 00:22:59.445 } 00:22:59.445 } 00:22:59.445 ]' 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.445 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.706 01:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.279 01:41:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.279 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.540 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.801 00:23:00.801 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.801 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.801 01:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.801 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.801 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.801 01:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.801 01:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.064 { 00:23:01.064 "cntlid": 109, 00:23:01.064 "qid": 0, 00:23:01.064 "state": "enabled", 00:23:01.064 "listen_address": { 00:23:01.064 "trtype": "TCP", 00:23:01.064 "adrfam": "IPv4", 00:23:01.064 "traddr": "10.0.0.2", 00:23:01.064 "trsvcid": "4420" 00:23:01.064 }, 00:23:01.064 "peer_address": { 00:23:01.064 "trtype": "TCP", 00:23:01.064 
"adrfam": "IPv4", 00:23:01.064 "traddr": "10.0.0.1", 00:23:01.064 "trsvcid": "58480" 00:23:01.064 }, 00:23:01.064 "auth": { 00:23:01.064 "state": "completed", 00:23:01.064 "digest": "sha512", 00:23:01.064 "dhgroup": "ffdhe2048" 00:23:01.064 } 00:23:01.064 } 00:23:01.064 ]' 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.064 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.325 01:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:01.898 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.159 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.421 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.421 { 00:23:02.421 "cntlid": 111, 00:23:02.421 "qid": 0, 00:23:02.421 "state": "enabled", 00:23:02.421 "listen_address": { 00:23:02.421 "trtype": "TCP", 00:23:02.421 "adrfam": "IPv4", 00:23:02.421 "traddr": "10.0.0.2", 00:23:02.421 "trsvcid": "4420" 00:23:02.421 }, 00:23:02.421 "peer_address": { 00:23:02.421 "trtype": "TCP", 00:23:02.421 "adrfam": "IPv4", 00:23:02.421 "traddr": "10.0.0.1", 00:23:02.421 "trsvcid": "58500" 00:23:02.421 }, 00:23:02.421 "auth": { 00:23:02.421 "state": "completed", 00:23:02.421 "digest": "sha512", 00:23:02.421 "dhgroup": "ffdhe2048" 00:23:02.421 } 00:23:02.421 } 00:23:02.421 ]' 00:23:02.421 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.682 01:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.944 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.515 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.776 01:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
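The passes in this stretch of the log all follow the same connect_authenticate pattern, varying only the DH group and key id. Distilled from the commands visible in the trace, the host-side setup for one pass reduces to three RPCs; the sketch below mirrors this run's socket path, NQNs and key names, which are values specific to this job rather than required ones.

#!/usr/bin/env bash
# Minimal sketch of one setup pass, as seen in the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                 # RPC socket of the host-side SPDK app
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha512                                # digest exercised in this stretch
dhgroup=ffdhe3072                            # DH group for this pass
keyid=0                                      # key index; key0..key3 were registered earlier

# 1) Pin the host initiator to a single digest/DH-group pair for this pass.
$RPC -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2) Allow the host NQN on the target subsystem with the chosen key
#    (this goes to the target app on its default RPC socket).
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOST_NQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3) Attach a controller from the host side, authenticating with the same keys.
$RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n "$SUBSYS" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

Note that the controller key is conditional in the real script (the ${ckeys[$3]:+...} expansion above); the key3 passes in this log attach with --dhchap-key only.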
00:23:04.036 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.036 { 00:23:04.036 "cntlid": 113, 00:23:04.036 "qid": 0, 00:23:04.036 "state": "enabled", 00:23:04.036 "listen_address": { 00:23:04.036 "trtype": "TCP", 00:23:04.036 "adrfam": "IPv4", 00:23:04.036 "traddr": "10.0.0.2", 00:23:04.036 "trsvcid": "4420" 00:23:04.036 }, 00:23:04.036 "peer_address": { 00:23:04.036 "trtype": "TCP", 00:23:04.036 "adrfam": "IPv4", 00:23:04.036 "traddr": "10.0.0.1", 00:23:04.036 "trsvcid": "58530" 00:23:04.036 }, 00:23:04.036 "auth": { 00:23:04.036 "state": "completed", 00:23:04.036 "digest": "sha512", 00:23:04.036 "dhgroup": "ffdhe3072" 00:23:04.036 } 00:23:04.036 } 00:23:04.036 ]' 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.036 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.297 01:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
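Once the controller is up, the trace checks what was actually negotiated: it confirms the controller name on the host, pulls the subsystem's queue pairs from the target, asserts on the reported auth block with the jq filters shown above, and detaches before the next pass. A small stand-alone sketch of that verification, reusing the same placeholder socket and NQN values as the previous sketch:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBSYS=nqn.2024-03.io.spdk:cnode0
digest=sha512 dhgroup=ffdhe3072

# The host-side bdev controller created by the attach must exist.
[[ $($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target reports one qpair whose auth block must match what was configured.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBSYS")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Tear down so the next digest/dhgroup/key combination starts clean.
$RPC -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

After this SPDK-initiator check, each pass in the log repeats the handshake with the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., then nvme disconnect) before removing the host from the subsystem.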
00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.252 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.253 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.253 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.253 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.513 00:23:05.513 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.513 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.513 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.775 { 00:23:05.775 
"cntlid": 115, 00:23:05.775 "qid": 0, 00:23:05.775 "state": "enabled", 00:23:05.775 "listen_address": { 00:23:05.775 "trtype": "TCP", 00:23:05.775 "adrfam": "IPv4", 00:23:05.775 "traddr": "10.0.0.2", 00:23:05.775 "trsvcid": "4420" 00:23:05.775 }, 00:23:05.775 "peer_address": { 00:23:05.775 "trtype": "TCP", 00:23:05.775 "adrfam": "IPv4", 00:23:05.775 "traddr": "10.0.0.1", 00:23:05.775 "trsvcid": "58556" 00:23:05.775 }, 00:23:05.775 "auth": { 00:23:05.775 "state": "completed", 00:23:05.775 "digest": "sha512", 00:23:05.775 "dhgroup": "ffdhe3072" 00:23:05.775 } 00:23:05.775 } 00:23:05.775 ]' 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.775 01:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.775 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:05.775 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.775 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.775 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.775 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.035 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:23:06.606 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.606 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.606 01:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.606 01:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.874 01:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.874 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.874 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.874 01:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.874 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.138 00:23:07.138 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.138 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.138 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.398 { 00:23:07.398 "cntlid": 117, 00:23:07.398 "qid": 0, 00:23:07.398 "state": "enabled", 00:23:07.398 "listen_address": { 00:23:07.398 "trtype": "TCP", 00:23:07.398 "adrfam": "IPv4", 00:23:07.398 "traddr": "10.0.0.2", 00:23:07.398 "trsvcid": "4420" 00:23:07.398 }, 00:23:07.398 "peer_address": { 00:23:07.398 "trtype": "TCP", 00:23:07.398 "adrfam": "IPv4", 00:23:07.398 "traddr": "10.0.0.1", 00:23:07.398 "trsvcid": "59678" 00:23:07.398 }, 00:23:07.398 "auth": { 00:23:07.398 "state": "completed", 00:23:07.398 "digest": "sha512", 00:23:07.398 "dhgroup": "ffdhe3072" 00:23:07.398 } 00:23:07.398 } 00:23:07.398 ]' 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.398 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.657 01:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.228 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.487 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.747 00:23:08.747 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.747 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.747 01:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.008 { 00:23:09.008 "cntlid": 119, 00:23:09.008 "qid": 0, 00:23:09.008 "state": "enabled", 00:23:09.008 "listen_address": { 00:23:09.008 "trtype": "TCP", 00:23:09.008 "adrfam": "IPv4", 00:23:09.008 "traddr": "10.0.0.2", 00:23:09.008 "trsvcid": "4420" 00:23:09.008 }, 00:23:09.008 "peer_address": { 00:23:09.008 "trtype": "TCP", 00:23:09.008 "adrfam": "IPv4", 00:23:09.008 "traddr": "10.0.0.1", 00:23:09.008 "trsvcid": "59714" 00:23:09.008 }, 00:23:09.008 "auth": { 00:23:09.008 "state": "completed", 00:23:09.008 "digest": "sha512", 00:23:09.008 "dhgroup": "ffdhe3072" 00:23:09.008 } 00:23:09.008 } 00:23:09.008 ]' 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.008 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.268 01:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:09.837 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.097 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.357 00:23:10.357 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.357 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.357 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.617 01:41:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.617 { 00:23:10.617 "cntlid": 121, 00:23:10.617 "qid": 0, 00:23:10.617 "state": "enabled", 00:23:10.617 "listen_address": { 00:23:10.617 "trtype": "TCP", 00:23:10.617 "adrfam": "IPv4", 00:23:10.617 "traddr": "10.0.0.2", 00:23:10.617 "trsvcid": "4420" 00:23:10.617 }, 00:23:10.617 "peer_address": { 00:23:10.617 "trtype": "TCP", 00:23:10.617 "adrfam": "IPv4", 00:23:10.617 "traddr": "10.0.0.1", 00:23:10.617 "trsvcid": "59752" 00:23:10.617 }, 00:23:10.617 "auth": { 00:23:10.617 "state": "completed", 00:23:10.617 "digest": "sha512", 00:23:10.617 "dhgroup": "ffdhe4096" 00:23:10.617 } 00:23:10.617 } 00:23:10.617 ]' 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.617 01:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.877 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:23:11.535 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.535 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.536 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.795 01:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.054 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.054 { 00:23:12.054 "cntlid": 123, 00:23:12.054 "qid": 0, 00:23:12.054 "state": "enabled", 00:23:12.054 "listen_address": { 00:23:12.054 "trtype": "TCP", 00:23:12.054 "adrfam": "IPv4", 00:23:12.054 "traddr": "10.0.0.2", 00:23:12.054 "trsvcid": "4420" 00:23:12.054 }, 00:23:12.054 "peer_address": { 00:23:12.054 "trtype": "TCP", 00:23:12.054 "adrfam": "IPv4", 00:23:12.054 "traddr": "10.0.0.1", 00:23:12.054 "trsvcid": "59768" 00:23:12.054 }, 00:23:12.054 "auth": { 00:23:12.054 "state": "completed", 00:23:12.054 "digest": "sha512", 00:23:12.054 "dhgroup": "ffdhe4096" 00:23:12.054 } 00:23:12.054 } 00:23:12.054 ]' 00:23:12.054 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.314 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.315 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.574 01:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:23:13.143 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.143 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.144 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.403 
01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.403 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.662 00:23:13.662 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.662 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.662 01:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.922 { 00:23:13.922 "cntlid": 125, 00:23:13.922 "qid": 0, 00:23:13.922 "state": "enabled", 00:23:13.922 "listen_address": { 00:23:13.922 "trtype": "TCP", 00:23:13.922 "adrfam": "IPv4", 00:23:13.922 "traddr": "10.0.0.2", 00:23:13.922 "trsvcid": "4420" 00:23:13.922 }, 00:23:13.922 "peer_address": { 00:23:13.922 "trtype": "TCP", 00:23:13.922 "adrfam": "IPv4", 00:23:13.922 "traddr": "10.0.0.1", 00:23:13.922 "trsvcid": "59804" 00:23:13.922 }, 00:23:13.922 "auth": { 00:23:13.922 "state": "completed", 00:23:13.922 "digest": "sha512", 00:23:13.922 "dhgroup": "ffdhe4096" 00:23:13.922 } 00:23:13.922 } 00:23:13.922 ]' 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.922 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.182 01:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.751 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.752 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.011 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.271 00:23:15.271 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.271 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.271 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.531 { 00:23:15.531 "cntlid": 127, 00:23:15.531 "qid": 0, 00:23:15.531 "state": "enabled", 00:23:15.531 "listen_address": { 00:23:15.531 "trtype": "TCP", 00:23:15.531 "adrfam": "IPv4", 00:23:15.531 "traddr": "10.0.0.2", 00:23:15.531 "trsvcid": "4420" 00:23:15.531 }, 00:23:15.531 "peer_address": { 00:23:15.531 "trtype": "TCP", 00:23:15.531 "adrfam": "IPv4", 00:23:15.531 "traddr": "10.0.0.1", 00:23:15.531 "trsvcid": "59822" 00:23:15.531 }, 00:23:15.531 "auth": { 00:23:15.531 "state": "completed", 00:23:15.531 "digest": "sha512", 00:23:15.531 "dhgroup": "ffdhe4096" 00:23:15.531 } 00:23:15.531 } 00:23:15.531 ]' 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.531 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.791 01:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
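The target/auth.sh@92, @93 and @94 markers scattered through the trace show how these passes are generated: an outer loop over the DH groups and an inner loop over the configured key ids, re-pinning the host options before every connect_authenticate call. The loop shape is roughly the sketch below; hostrpc, connect_authenticate and the keys/ckeys arrays are helpers this script sets up earlier in the run and are not reproduced here, only the groups visible in this stretch of the log are listed, and a further loop over digests (sha512 here) presumably wraps the whole thing.

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this stretch

for dhgroup in "${dhgroups[@]}"; do                   # auth.sh@92
    for keyid in "${!keys[@]}"; do                    # auth.sh@93
        # Limit the host initiator to sha512 plus the current DH group...  (auth.sh@94)
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # ...then run one authenticated attach / verify / detach cycle.    (auth.sh@96)
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done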
00:23:16.361 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.621 01:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.880 00:23:16.880 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.880 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.880 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.140 { 00:23:17.140 "cntlid": 129, 00:23:17.140 "qid": 0, 00:23:17.140 "state": "enabled", 00:23:17.140 "listen_address": { 00:23:17.140 "trtype": "TCP", 00:23:17.140 "adrfam": "IPv4", 00:23:17.140 "traddr": "10.0.0.2", 00:23:17.140 "trsvcid": "4420" 00:23:17.140 }, 00:23:17.140 "peer_address": { 00:23:17.140 "trtype": "TCP", 00:23:17.140 "adrfam": "IPv4", 00:23:17.140 "traddr": "10.0.0.1", 00:23:17.140 "trsvcid": "59844" 00:23:17.140 }, 00:23:17.140 "auth": { 
00:23:17.140 "state": "completed", 00:23:17.140 "digest": "sha512", 00:23:17.140 "dhgroup": "ffdhe6144" 00:23:17.140 } 00:23:17.140 } 00:23:17.140 ]' 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:17.140 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.400 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.400 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.400 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.400 01:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.601 00:23:18.601 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.601 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.601 01:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.862 { 00:23:18.862 "cntlid": 131, 00:23:18.862 "qid": 0, 00:23:18.862 "state": "enabled", 00:23:18.862 "listen_address": { 00:23:18.862 "trtype": "TCP", 00:23:18.862 "adrfam": "IPv4", 00:23:18.862 "traddr": "10.0.0.2", 00:23:18.862 "trsvcid": "4420" 00:23:18.862 }, 00:23:18.862 "peer_address": { 00:23:18.862 "trtype": "TCP", 00:23:18.862 "adrfam": "IPv4", 00:23:18.862 "traddr": "10.0.0.1", 00:23:18.862 "trsvcid": "58258" 00:23:18.862 }, 00:23:18.862 "auth": { 00:23:18.862 "state": "completed", 00:23:18.862 "digest": "sha512", 00:23:18.862 "dhgroup": "ffdhe6144" 00:23:18.862 } 00:23:18.862 } 00:23:18.862 ]' 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:18.862 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.122 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.122 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.122 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.122 01:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.062 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:23:20.322 00:23:20.322 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.322 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.322 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.582 { 00:23:20.582 "cntlid": 133, 00:23:20.582 "qid": 0, 00:23:20.582 "state": "enabled", 00:23:20.582 "listen_address": { 00:23:20.582 "trtype": "TCP", 00:23:20.582 "adrfam": "IPv4", 00:23:20.582 "traddr": "10.0.0.2", 00:23:20.582 "trsvcid": "4420" 00:23:20.582 }, 00:23:20.582 "peer_address": { 00:23:20.582 "trtype": "TCP", 00:23:20.582 "adrfam": "IPv4", 00:23:20.582 "traddr": "10.0.0.1", 00:23:20.582 "trsvcid": "58284" 00:23:20.582 }, 00:23:20.582 "auth": { 00:23:20.582 "state": "completed", 00:23:20.582 "digest": "sha512", 00:23:20.582 "dhgroup": "ffdhe6144" 00:23:20.582 } 00:23:20.582 } 00:23:20.582 ]' 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:20.582 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.843 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.843 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.843 01:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.843 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.782 01:41:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:21.782 01:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:22.041 00:23:22.041 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.041 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.041 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.301 { 00:23:22.301 "cntlid": 135, 00:23:22.301 "qid": 0, 00:23:22.301 "state": "enabled", 00:23:22.301 "listen_address": { 
00:23:22.301 "trtype": "TCP", 00:23:22.301 "adrfam": "IPv4", 00:23:22.301 "traddr": "10.0.0.2", 00:23:22.301 "trsvcid": "4420" 00:23:22.301 }, 00:23:22.301 "peer_address": { 00:23:22.301 "trtype": "TCP", 00:23:22.301 "adrfam": "IPv4", 00:23:22.301 "traddr": "10.0.0.1", 00:23:22.301 "trsvcid": "58302" 00:23:22.301 }, 00:23:22.301 "auth": { 00:23:22.301 "state": "completed", 00:23:22.301 "digest": "sha512", 00:23:22.301 "dhgroup": "ffdhe6144" 00:23:22.301 } 00:23:22.301 } 00:23:22.301 ]' 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.301 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.561 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.561 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.561 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.561 01:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.500 01:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.070 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.070 { 00:23:24.070 "cntlid": 137, 00:23:24.070 "qid": 0, 00:23:24.070 "state": "enabled", 00:23:24.070 "listen_address": { 00:23:24.070 "trtype": "TCP", 00:23:24.070 "adrfam": "IPv4", 00:23:24.070 "traddr": "10.0.0.2", 00:23:24.070 "trsvcid": "4420" 00:23:24.070 }, 00:23:24.070 "peer_address": { 00:23:24.070 "trtype": "TCP", 00:23:24.070 "adrfam": "IPv4", 00:23:24.070 "traddr": "10.0.0.1", 00:23:24.070 "trsvcid": "58326" 00:23:24.070 }, 00:23:24.070 "auth": { 00:23:24.070 "state": "completed", 00:23:24.070 "digest": "sha512", 00:23:24.070 "dhgroup": "ffdhe8192" 00:23:24.070 } 00:23:24.070 } 00:23:24.070 ]' 00:23:24.070 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:24.330 01:41:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.330 01:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.270 01:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.270 01:41:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.841 00:23:25.841 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.841 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.841 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.100 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:26.101 { 00:23:26.101 "cntlid": 139, 00:23:26.101 "qid": 0, 00:23:26.101 "state": "enabled", 00:23:26.101 "listen_address": { 00:23:26.101 "trtype": "TCP", 00:23:26.101 "adrfam": "IPv4", 00:23:26.101 "traddr": "10.0.0.2", 00:23:26.101 "trsvcid": "4420" 00:23:26.101 }, 00:23:26.101 "peer_address": { 00:23:26.101 "trtype": "TCP", 00:23:26.101 "adrfam": "IPv4", 00:23:26.101 "traddr": "10.0.0.1", 00:23:26.101 "trsvcid": "58368" 00:23:26.101 }, 00:23:26.101 "auth": { 00:23:26.101 "state": "completed", 00:23:26.101 "digest": "sha512", 00:23:26.101 "dhgroup": "ffdhe8192" 00:23:26.101 } 00:23:26.101 } 00:23:26.101 ]' 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.101 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.361 01:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MmNlMTk4ZTZiZjFjNDZjNmZlMDI0ODk4ZjkxZTAyYjE3nWBj: --dhchap-ctrl-secret DHHC-1:02:ZjBlNmE0M2I3ZjRiNjNkM2U2OTE5MzM4N2U3NDRjYjQwYzA3MGFiMDJmNTdhMTYwXplV/g==: 00:23:26.930 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:23:26.930 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:26.930 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.930 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.190 01:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.191 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.191 01:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.765 00:23:27.765 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:27.765 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:27.765 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.082 { 00:23:28.082 "cntlid": 141, 00:23:28.082 "qid": 0, 00:23:28.082 "state": "enabled", 00:23:28.082 "listen_address": { 00:23:28.082 "trtype": "TCP", 00:23:28.082 "adrfam": "IPv4", 00:23:28.082 "traddr": "10.0.0.2", 00:23:28.082 "trsvcid": "4420" 00:23:28.082 }, 00:23:28.082 "peer_address": { 00:23:28.082 "trtype": "TCP", 00:23:28.082 "adrfam": "IPv4", 00:23:28.082 "traddr": "10.0.0.1", 00:23:28.082 "trsvcid": "34376" 00:23:28.082 }, 00:23:28.082 "auth": { 00:23:28.082 "state": "completed", 00:23:28.082 "digest": "sha512", 00:23:28.082 "dhgroup": "ffdhe8192" 00:23:28.082 } 00:23:28.082 } 00:23:28.082 ]' 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.082 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.342 01:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZjM1ODEzYzQ5NWNjYjk5N2RhYTQyODc5NWIxZDA3OWI2ZTk4ZGVkZjFmMmIzNmYyF9jyOg==: --dhchap-ctrl-secret DHHC-1:01:NWM0ZGUwMGRhMDEwMWNmNTc5ZGQ2MzI5OTBmNzYwZjU4iU64: 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:28.913 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.173 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.745 00:23:29.745 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:29.745 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:29.745 01:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:29.745 { 00:23:29.745 "cntlid": 143, 00:23:29.745 "qid": 0, 00:23:29.745 "state": "enabled", 00:23:29.745 "listen_address": { 00:23:29.745 "trtype": "TCP", 00:23:29.745 "adrfam": "IPv4", 00:23:29.745 "traddr": "10.0.0.2", 00:23:29.745 "trsvcid": "4420" 00:23:29.745 }, 00:23:29.745 "peer_address": { 00:23:29.745 "trtype": "TCP", 00:23:29.745 "adrfam": "IPv4", 00:23:29.745 "traddr": "10.0.0.1", 00:23:29.745 "trsvcid": "34388" 00:23:29.745 }, 00:23:29.745 "auth": { 00:23:29.745 "state": "completed", 00:23:29.745 "digest": "sha512", 00:23:29.745 "dhgroup": "ffdhe8192" 00:23:29.745 } 00:23:29.745 } 00:23:29.745 ]' 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:29.745 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.745 01:41:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.005 01:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
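The iterations traced above all follow the same connect_authenticate pattern. The sketch below condenses one iteration (sha512 digest, ffdhe8192 group, key0/ckey0) into the bare RPC and jq calls that appear in the trace; it is illustrative only and assumes what the log has already established: a target subsystem nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420, a host-side bdev_nvme RPC socket at /var/tmp/host.sock, and DHCHAP key names key0/ckey0 registered earlier in auth.sh (outside this excerpt). Target-side calls are shown against the default RPC socket, whereas the test routes them through its own rpc_cmd helper.

  #!/usr/bin/env bash
  # Minimal sketch of one connect_authenticate iteration, built only from the
  # RPCs visible in the trace above. Key names (key0/ckey0) are assumed to be
  # registered with the target and host already.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Host side: restrict negotiation to the digest/dhgroup under test.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host with the (bidirectional) key pair under test.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
  $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: confirm what was negotiated on the new qpair.
  $rpc nvmf_subsystem_get_qpairs $subnqn \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # sha512 / ffdhe8192 / completed

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The kernel-initiator variant interleaved in the trace performs the same handshake with nvme connect, passing the DHHC-1 secrets directly via --dhchap-secret and --dhchap-ctrl-secret instead of key names.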
00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.947 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.518 00:23:31.518 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.518 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.518 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:31.778 { 00:23:31.778 "cntlid": 145, 00:23:31.778 "qid": 0, 00:23:31.778 "state": "enabled", 00:23:31.778 "listen_address": { 00:23:31.778 "trtype": "TCP", 00:23:31.778 "adrfam": "IPv4", 00:23:31.778 "traddr": "10.0.0.2", 00:23:31.778 "trsvcid": "4420" 00:23:31.778 }, 00:23:31.778 "peer_address": { 00:23:31.778 "trtype": "TCP", 00:23:31.778 "adrfam": "IPv4", 00:23:31.778 "traddr": "10.0.0.1", 00:23:31.778 "trsvcid": "34410" 00:23:31.778 }, 00:23:31.778 "auth": { 00:23:31.778 "state": "completed", 00:23:31.778 "digest": "sha512", 00:23:31.778 "dhgroup": "ffdhe8192" 00:23:31.778 } 00:23:31.778 } 00:23:31.778 ]' 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.778 01:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:31.778 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:31.778 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:31.778 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.778 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.778 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.038 
01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:NzdkYmUzYmJhOTViMTEwODQxMGIzNjRiNGIzNTIxMmVjYzM1YTU3YzZmMDI2MWI2pmq2Cw==: --dhchap-ctrl-secret DHHC-1:03:YmZkNjlhN2Y0M2JlNzljN2QxY2UzZWEzZjlmZmM5Nzg5NzZmNGE1MDBkMDYxNDZkZjA4NzczODdmMzI0MjAwMgK02kc=: 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:32.609 01:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:33.180 request: 00:23:33.180 { 00:23:33.180 "name": "nvme0", 00:23:33.180 "trtype": "tcp", 00:23:33.180 "traddr": 
"10.0.0.2", 00:23:33.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:33.180 "adrfam": "ipv4", 00:23:33.180 "trsvcid": "4420", 00:23:33.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.180 "dhchap_key": "key2", 00:23:33.180 "method": "bdev_nvme_attach_controller", 00:23:33.180 "req_id": 1 00:23:33.180 } 00:23:33.180 Got JSON-RPC error response 00:23:33.180 response: 00:23:33.180 { 00:23:33.180 "code": -5, 00:23:33.180 "message": "Input/output error" 00:23:33.180 } 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.180 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.752 request: 00:23:33.752 { 00:23:33.752 "name": "nvme0", 00:23:33.752 "trtype": "tcp", 00:23:33.752 "traddr": "10.0.0.2", 00:23:33.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:33.752 "adrfam": "ipv4", 00:23:33.752 "trsvcid": "4420", 00:23:33.752 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.752 "dhchap_key": "key1", 00:23:33.752 "dhchap_ctrlr_key": "ckey2", 00:23:33.752 "method": "bdev_nvme_attach_controller", 00:23:33.752 "req_id": 1 00:23:33.752 } 00:23:33.752 Got JSON-RPC error response 00:23:33.752 response: 00:23:33.752 { 00:23:33.752 "code": -5, 00:23:33.752 "message": "Input/output error" 00:23:33.752 } 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.752 01:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.322 request: 00:23:34.322 { 00:23:34.322 "name": "nvme0", 00:23:34.322 "trtype": "tcp", 00:23:34.322 "traddr": "10.0.0.2", 00:23:34.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:34.322 "adrfam": "ipv4", 00:23:34.322 "trsvcid": "4420", 00:23:34.322 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:34.322 "dhchap_key": "key1", 00:23:34.322 "dhchap_ctrlr_key": "ckey1", 00:23:34.322 "method": "bdev_nvme_attach_controller", 00:23:34.322 "req_id": 1 00:23:34.322 } 00:23:34.322 Got JSON-RPC error response 00:23:34.322 response: 00:23:34.322 { 00:23:34.322 "code": -5, 00:23:34.322 "message": "Input/output error" 00:23:34.322 } 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4002002 ']' 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002002' 00:23:34.322 killing process with pid 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4002002 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:34.322 01:42:00 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4027426 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4027426 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4027426 ']' 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.322 01:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4027426 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4027426 ']' 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
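The JSON-RPC error responses earlier in this trace are the expected-failure half of the test: when the host offers a key the subsystem was not configured with (key2, or a mismatched ckey, against a key1-only host entry), bdev_nvme_attach_controller must come back with JSON-RPC error code -5 (Input/output error) rather than a controller. A hypothetical stand-alone version of that check, without the NOT/valid_exec_arg wrappers from autotest_common.sh, is sketched below; the subsystem, host NQN and key names are the same assumptions as in the previous sketch.

  # Hypothetical stand-alone failure-path check, mirroring target/auth.sh@117-118:
  # the target only knows key1 for this host, so offering key2 must be rejected.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1

  if out=$($rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
           -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2 2>&1); then
    echo "attach with the wrong key unexpectedly succeeded" >&2
    exit 1
  fi
  grep -q 'Input/output error' <<< "$out"   # the code -5 response seen in the trace

  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn   # reset for the next case

After these failure-path cases the script kills the original target process (pid 4002002) and relaunches nvmf_tgt with --wait-for-rpc -L nvmf_auth, which is the target instance (pid 4027426) that the remaining iterations in the trace run against.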
00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.261 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:35.521 01:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:36.092 00:23:36.092 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:36.092 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:36.092 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:36.353 { 00:23:36.353 
"cntlid": 1, 00:23:36.353 "qid": 0, 00:23:36.353 "state": "enabled", 00:23:36.353 "listen_address": { 00:23:36.353 "trtype": "TCP", 00:23:36.353 "adrfam": "IPv4", 00:23:36.353 "traddr": "10.0.0.2", 00:23:36.353 "trsvcid": "4420" 00:23:36.353 }, 00:23:36.353 "peer_address": { 00:23:36.353 "trtype": "TCP", 00:23:36.353 "adrfam": "IPv4", 00:23:36.353 "traddr": "10.0.0.1", 00:23:36.353 "trsvcid": "34460" 00:23:36.353 }, 00:23:36.353 "auth": { 00:23:36.353 "state": "completed", 00:23:36.353 "digest": "sha512", 00:23:36.353 "dhgroup": "ffdhe8192" 00:23:36.353 } 00:23:36.353 } 00:23:36.353 ]' 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.353 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.615 01:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ZDJkMDI5Y2YwZWUxZWI2Yjc0ODQ0ZWNiYjk3MDMzZDY1ZDI0NTM5MjY1MDEyYTFkNTZjOTBkNzhkMDI4ODc5Nx0ENzQ=: 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:37.184 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.444 request: 00:23:37.444 { 00:23:37.444 "name": "nvme0", 00:23:37.444 "trtype": "tcp", 00:23:37.444 "traddr": "10.0.0.2", 00:23:37.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:37.444 "adrfam": "ipv4", 00:23:37.444 "trsvcid": "4420", 00:23:37.444 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:37.444 "dhchap_key": "key3", 00:23:37.444 "method": "bdev_nvme_attach_controller", 00:23:37.444 "req_id": 1 00:23:37.444 } 00:23:37.444 Got JSON-RPC error response 00:23:37.444 response: 00:23:37.444 { 00:23:37.444 "code": -5, 00:23:37.444 "message": "Input/output error" 00:23:37.444 } 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:37.444 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.703 01:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.964 request: 00:23:37.964 { 00:23:37.964 "name": "nvme0", 00:23:37.964 "trtype": "tcp", 00:23:37.964 "traddr": "10.0.0.2", 00:23:37.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:37.964 "adrfam": "ipv4", 00:23:37.964 "trsvcid": "4420", 00:23:37.964 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:37.964 "dhchap_key": "key3", 00:23:37.964 "method": "bdev_nvme_attach_controller", 00:23:37.964 "req_id": 1 00:23:37.964 } 00:23:37.964 Got JSON-RPC error response 00:23:37.964 response: 00:23:37.964 { 00:23:37.964 "code": -5, 00:23:37.964 "message": "Input/output error" 00:23:37.964 } 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:37.964 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.224 request: 00:23:38.224 { 00:23:38.224 "name": "nvme0", 00:23:38.224 "trtype": "tcp", 00:23:38.224 "traddr": "10.0.0.2", 00:23:38.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:38.224 "adrfam": "ipv4", 00:23:38.224 "trsvcid": "4420", 00:23:38.224 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.224 "dhchap_key": "key0", 00:23:38.224 "dhchap_ctrlr_key": "key1", 00:23:38.224 "method": "bdev_nvme_attach_controller", 00:23:38.224 "req_id": 1 00:23:38.224 } 00:23:38.224 Got JSON-RPC error response 00:23:38.224 response: 00:23:38.224 { 00:23:38.224 "code": -5, 00:23:38.224 "message": "Input/output error" 00:23:38.224 } 00:23:38.224 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:38.224 01:42:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.224 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.224 01:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.224 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:38.224 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:38.483 00:23:38.484 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:38.484 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:38.484 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.769 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.769 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.769 01:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4002348 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4002348 ']' 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4002348 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002348 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002348' 00:23:38.769 killing process with pid 4002348 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4002348 00:23:38.769 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4002348 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
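What follows is the usual nvmftestfini/cleanup tail for this suite. Condensed into a few shell lines (ordering and the key-file glob are approximate; the interface name cvl_0_1 and the module names are the ones in the trace, and the namespace removal performed by remove_spdk_ns is only summarized in a comment):

    sync
    set +e
    modprobe -v -r nvme-tcp       # rmmod nvme_tcp; nvme_fabrics and nvme_keyring go with it
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"               # stop the nvmf_tgt instance started for this test
    wait "$nvmfpid"
    # remove_spdk_ns tears down the cvl_0_0_ns_spdk target namespace, then:
    ip -4 addr flush cvl_0_1      # drop the initiator-side test address
    rm -f /tmp/spdk.key-*         # scrub the generated DH-CHAP key files (exact names listed below)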
00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.120 rmmod nvme_tcp 00:23:39.120 rmmod nvme_fabrics 00:23:39.120 rmmod nvme_keyring 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4027426 ']' 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4027426 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4027426 ']' 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4027426 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4027426 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4027426' 00:23:39.120 killing process with pid 4027426 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4027426 00:23:39.120 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4027426 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.400 01:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.309 01:42:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.309 01:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gU1 /tmp/spdk.key-sha256.yfH /tmp/spdk.key-sha384.xwU /tmp/spdk.key-sha512.PDa /tmp/spdk.key-sha512.Qtq /tmp/spdk.key-sha384.F7t /tmp/spdk.key-sha256.NJJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:41.309 00:23:41.309 real 2m21.299s 00:23:41.309 user 5m12.223s 00:23:41.309 sys 0m19.892s 00:23:41.309 01:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.309 01:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.309 ************************************ 00:23:41.309 END TEST 
nvmf_auth_target 00:23:41.309 ************************************ 00:23:41.309 01:42:07 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:41.309 01:42:07 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:41.309 01:42:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:41.309 01:42:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.309 01:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.570 ************************************ 00:23:41.570 START TEST nvmf_bdevio_no_huge 00:23:41.570 ************************************ 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:41.570 * Looking for test storage... 00:23:41.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
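Before any bdevio work starts, the test sources test/nvmf/common.sh, and the trace above shows the identity it builds for the initiator. Reduced to the assignments that matter here (values copied from the trace; the expression deriving the host ID from the NQN is a plausible reconstruction, not necessarily the exact line in common.sh):

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:00539ede-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # the uuid portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy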
00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.570 01:42:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.708 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:49.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:49.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.709 01:42:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:49.709 Found net devices under 0000:31:00.0: cvl_0_0 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:49.709 Found net devices under 0000:31:00.1: cvl_0_1 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.709 
01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.709 01:42:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.709 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.709 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:23:49.970 00:23:49.970 --- 10.0.0.2 ping statistics --- 00:23:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.970 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:49.970 00:23:49.970 --- 10.0.0.1 ping statistics --- 00:23:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.970 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4033673 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4033673 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 4033673 ']' 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:49.970 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.970 [2024-07-12 01:42:16.163056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:49.970 [2024-07-12 01:42:16.163109] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:49.970 [2024-07-12 01:42:16.253788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.970 [2024-07-12 01:42:16.324507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.970 [2024-07-12 01:42:16.324555] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.970 [2024-07-12 01:42:16.324564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.970 [2024-07-12 01:42:16.324571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.970 [2024-07-12 01:42:16.324577] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.970 [2024-07-12 01:42:16.324732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:49.970 [2024-07-12 01:42:16.324872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:49.970 [2024-07-12 01:42:16.325030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.970 [2024-07-12 01:42:16.325031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.913 01:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 [2024-07-12 01:42:17.004062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
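The target for this suite is deliberately launched without hugepages, which is the point of nvmf_bdevio_no_huge. A condensed sketch of the launch and the first two RPCs, taken from the command lines in the trace (the build path is shortened and the pid capture is illustrative; the RPC flags are verbatim):

    # nvmf_tgt inside the target netns: no hugepages, 1024 MiB of plain memory,
    # core mask 0x78 (cores 3-6, matching the reactor messages above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # once /var/tmp/spdk.sock is up:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as used by bdevio.sh
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512-byte blocks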
00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 Malloc0 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.913 [2024-07-12 01:42:17.058246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.913 { 00:23:50.913 "params": { 00:23:50.913 "name": "Nvme$subsystem", 00:23:50.913 "trtype": "$TEST_TRANSPORT", 00:23:50.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.913 "adrfam": "ipv4", 00:23:50.913 "trsvcid": "$NVMF_PORT", 00:23:50.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.913 "hdgst": ${hdgst:-false}, 00:23:50.913 "ddgst": ${ddgst:-false} 00:23:50.913 }, 00:23:50.913 "method": "bdev_nvme_attach_controller" 00:23:50.913 } 00:23:50.913 EOF 00:23:50.913 )") 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
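With the transport and Malloc0 in place, the trace wires Malloc0 into a subsystem and then hands bdevio a JSON config produced by gen_nvmf_target_json. A condensed sketch (names, serial, address and port are the ones in the trace; feeding the generated JSON through process substitution is an inference from the /dev/fd/62 path passed to bdevio above):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio reads the initiator-side bdev_nvme_attach_controller config and runs
    # its CUnit suite against the resulting Nvme1n1 bdev, also without hugepages
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024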
00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:50.913 01:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:50.913 "params": { 00:23:50.913 "name": "Nvme1", 00:23:50.913 "trtype": "tcp", 00:23:50.913 "traddr": "10.0.0.2", 00:23:50.913 "adrfam": "ipv4", 00:23:50.913 "trsvcid": "4420", 00:23:50.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.913 "hdgst": false, 00:23:50.913 "ddgst": false 00:23:50.913 }, 00:23:50.913 "method": "bdev_nvme_attach_controller" 00:23:50.913 }' 00:23:50.913 [2024-07-12 01:42:17.114900] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:50.913 [2024-07-12 01:42:17.114970] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4033861 ] 00:23:50.913 [2024-07-12 01:42:17.187774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.913 [2024-07-12 01:42:17.258276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.913 [2024-07-12 01:42:17.258340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.913 [2024-07-12 01:42:17.258343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.174 I/O targets: 00:23:51.174 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:51.174 00:23:51.174 00:23:51.174 CUnit - A unit testing framework for C - Version 2.1-3 00:23:51.174 http://cunit.sourceforge.net/ 00:23:51.174 00:23:51.174 00:23:51.174 Suite: bdevio tests on: Nvme1n1 00:23:51.174 Test: blockdev write read block ...passed 00:23:51.174 Test: blockdev write zeroes read block ...passed 00:23:51.434 Test: blockdev write zeroes read no split ...passed 00:23:51.434 Test: blockdev write zeroes read split ...passed 00:23:51.434 Test: blockdev write zeroes read split partial ...passed 00:23:51.434 Test: blockdev reset ...[2024-07-12 01:42:17.564195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:51.434 [2024-07-12 01:42:17.564256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b1510 (9): Bad file descriptor 00:23:51.434 [2024-07-12 01:42:17.577102] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:51.434 passed 00:23:51.434 Test: blockdev write read 8 blocks ...passed 00:23:51.434 Test: blockdev write read size > 128k ...passed 00:23:51.434 Test: blockdev write read invalid size ...passed 00:23:51.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:51.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:51.434 Test: blockdev write read max offset ...passed 00:23:51.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:51.434 Test: blockdev writev readv 8 blocks ...passed 00:23:51.434 Test: blockdev writev readv 30 x 1block ...passed 00:23:51.694 Test: blockdev writev readv block ...passed 00:23:51.694 Test: blockdev writev readv size > 128k ...passed 00:23:51.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:51.694 Test: blockdev comparev and writev ...[2024-07-12 01:42:17.799491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.799515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.799526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.799532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.800007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.800016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.800025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.800030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.800505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.800514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.800520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.801006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.801014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.801023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.694 [2024-07-12 01:42:17.801028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:51.694 passed 00:23:51.694 Test: blockdev nvme passthru rw ...passed 00:23:51.694 Test: blockdev nvme passthru vendor specific ...[2024-07-12 01:42:17.885097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.694 [2024-07-12 01:42:17.885107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.885383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.694 [2024-07-12 01:42:17.885390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.885692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.694 [2024-07-12 01:42:17.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:51.694 [2024-07-12 01:42:17.885968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.694 [2024-07-12 01:42:17.885975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:51.694 passed 00:23:51.694 Test: blockdev nvme admin passthru ...passed 00:23:51.694 Test: blockdev copy ...passed 00:23:51.694 00:23:51.694 Run Summary: Type Total Ran Passed Failed Inactive 00:23:51.694 suites 1 1 n/a 0 0 00:23:51.694 tests 23 23 23 0 0 00:23:51.694 asserts 152 152 152 0 n/a 00:23:51.694 00:23:51.694 Elapsed time = 1.031 seconds 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.955 rmmod nvme_tcp 00:23:51.955 rmmod nvme_fabrics 00:23:51.955 rmmod nvme_keyring 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4033673 ']' 00:23:51.955 01:42:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4033673 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 4033673 ']' 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 4033673 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.955 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4033673 00:23:52.215 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:23:52.215 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:23:52.215 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4033673' 00:23:52.215 killing process with pid 4033673 00:23:52.215 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 4033673 00:23:52.215 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 4033673 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.476 01:42:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.020 01:42:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.020 00:23:55.020 real 0m13.061s 00:23:55.020 user 0m13.133s 00:23:55.020 sys 0m7.177s 00:23:55.020 01:42:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:55.020 01:42:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:55.020 ************************************ 00:23:55.020 END TEST nvmf_bdevio_no_huge 00:23:55.020 ************************************ 00:23:55.020 01:42:20 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:55.020 01:42:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:55.020 01:42:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:55.020 01:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.020 ************************************ 00:23:55.020 START TEST nvmf_tls 00:23:55.020 ************************************ 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:55.020 * Looking for test storage... 
00:23:55.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.020 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.021 01:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.158 
01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:03.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:03.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:03.158 Found net devices under 0000:31:00.0: cvl_0_0 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:03.158 Found net devices under 0000:31:00.1: cvl_0_1 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.158 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.159 01:42:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:24:03.159 00:24:03.159 --- 10.0.0.2 ping statistics --- 00:24:03.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.159 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
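The nvmf_tcp_init steps above turn the two back-to-back E810 ports into a minimal two-host topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in this log (interface and namespace names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The single-packet pings confirm the path in both directions before any NVMe/TCP traffic is attempted.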
00:24:03.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:24:03.159 00:24:03.159 --- 10.0.0.1 ping statistics --- 00:24:03.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.159 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4038730 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4038730 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4038730 ']' 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.159 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.159 [2024-07-12 01:42:29.212996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:03.159 [2024-07-12 01:42:29.213058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.159 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.159 [2024-07-12 01:42:29.313430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.159 [2024-07-12 01:42:29.360115] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.159 [2024-07-12 01:42:29.360173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
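nvmfappstart then launches the target inside that namespace with a one-core mask and deferred initialization; a condensed sketch of the launch (run from the SPDK repo root, with waitforlisten being the autotest helper that blocks until the RPC socket answers):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # wait for /var/tmp/spdk.sock to come up

--wait-for-rpc keeps the app in its pre-init state so that the ssl socket options below (sock_set_default_impl -i ssl, sock_impl_set_options --tls-version 13) can be applied before framework_start_init completes initialization and the TCP transport is created.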
00:24:03.159 [2024-07-12 01:42:29.360181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.159 [2024-07-12 01:42:29.360188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.159 [2024-07-12 01:42:29.360194] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.159 [2024-07-12 01:42:29.360228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.732 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.732 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:03.732 01:42:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.732 01:42:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.732 01:42:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.732 01:42:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.732 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:03.732 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:03.992 true 00:24:03.992 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:03.992 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:04.253 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:04.254 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:04.254 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:04.254 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.254 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:04.515 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:04.515 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:04.515 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:04.775 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.775 01:42:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:04.776 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:04.776 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:04.776 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.776 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:05.036 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:05.036 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:05.036 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:05.036 01:42:31 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.036 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:05.296 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:05.296 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:05.296 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.pYpTKAcbrx 00:24:05.557 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.H6UjwbIqEu 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.pYpTKAcbrx 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.H6UjwbIqEu 00:24:05.818 01:42:31 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:24:05.818 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:06.080 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.pYpTKAcbrx 00:24:06.080 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pYpTKAcbrx 00:24:06.080 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.080 [2024-07-12 01:42:32.422798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.340 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:06.340 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:06.600 [2024-07-12 01:42:32.715511] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:06.600 [2024-07-12 01:42:32.715694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.600 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:06.600 malloc0 00:24:06.600 01:42:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:06.868 01:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pYpTKAcbrx 00:24:06.868 [2024-07-12 01:42:33.130497] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:06.868 01:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pYpTKAcbrx 00:24:06.868 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.092 Initializing NVMe Controllers 00:24:19.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.092 Initialization complete. Launching workers. 
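The two /tmp/tmp.* files populated above hold the keys in the NVMe TLS PSK interchange format emitted by format_interchange_psk. A minimal Python sketch of that encoding, mirroring the inline python step in nvmf/common.sh and assuming the TP 8006 layout (prefix, two-digit hash indicator, then base64 of the configured PSK bytes followed by a little-endian CRC-32):

    import base64
    import zlib

    # First test key from this run, treated as ASCII bytes exactly as the shell passes it.
    key = b"00112233445566778899aabbccddeeff"
    hash_id = 1                                    # hash indicator; this run uses 1 for both keys
    crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte integrity check appended to the PSK
    print(f"NVMeTLSkey-1:{hash_id:02x}:{base64.b64encode(key + crc).decode()}:")

With the inputs shown this should reproduce the NVMeTLSkey-1:01:MDAx...JEiQ: string written to /tmp/tmp.pYpTKAcbrx; both key files are then restricted to mode 0600 before being handed to the --psk / --psk-path options used below.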
00:24:19.092 ======================================================== 00:24:19.092 Latency(us) 00:24:19.092 Device Information : IOPS MiB/s Average min max 00:24:19.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19095.75 74.59 3351.52 1022.22 3954.76 00:24:19.093 ======================================================== 00:24:19.093 Total : 19095.75 74.59 3351.52 1022.22 3954.76 00:24:19.093 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pYpTKAcbrx 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pYpTKAcbrx' 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4041467 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4041467 /var/tmp/bdevperf.sock 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4041467 ']' 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.093 [2024-07-12 01:42:43.291855] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:19.093 [2024-07-12 01:42:43.291911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041467 ] 00:24:19.093 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.093 [2024-07-12 01:42:43.347915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.093 [2024-07-12 01:42:43.376145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pYpTKAcbrx 00:24:19.093 [2024-07-12 01:42:43.582372] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.093 [2024-07-12 01:42:43.582425] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:19.093 TLSTESTn1 00:24:19.093 01:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:19.093 Running I/O for 10 seconds... 00:24:29.090 00:24:29.090 Latency(us) 00:24:29.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.090 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:29.090 Verification LBA range: start 0x0 length 0x2000 00:24:29.090 TLSTESTn1 : 10.01 5738.36 22.42 0.00 0.00 22274.40 4560.21 69905.07 00:24:29.090 =================================================================================================================== 00:24:29.090 Total : 5738.36 22.42 0.00 0.00 22274.40 4560.21 69905.07 00:24:29.090 0 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4041467 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4041467 ']' 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4041467 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4041467 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4041467' 00:24:29.090 killing process with pid 4041467 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4041467 00:24:29.090 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.090 00:24:29.090 Latency(us) 00:24:29.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:29.090 =================================================================================================================== 00:24:29.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.090 [2024-07-12 01:42:53.874022] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4041467 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6UjwbIqEu 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6UjwbIqEu 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H6UjwbIqEu 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H6UjwbIqEu' 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4043600 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4043600 /var/tmp/bdevperf.sock 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4043600 ']' 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:29.090 01:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 [2024-07-12 01:42:54.029880] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:29.090 [2024-07-12 01:42:54.029937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043600 ] 00:24:29.090 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.090 [2024-07-12 01:42:54.085437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.090 [2024-07-12 01:42:54.113356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H6UjwbIqEu 00:24:29.090 [2024-07-12 01:42:54.319645] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.090 [2024-07-12 01:42:54.319697] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:29.090 [2024-07-12 01:42:54.329226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.090 [2024-07-12 01:42:54.329774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5380 (107): Transport endpoint is not connected 00:24:29.090 [2024-07-12 01:42:54.330769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc5380 (9): Bad file descriptor 00:24:29.090 [2024-07-12 01:42:54.331771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.090 [2024-07-12 01:42:54.331779] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:29.090 [2024-07-12 01:42:54.331785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
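This attach is the first negative case (the NOT run_bdevperf at tls.sh line 146): the bdevperf initiator presents /tmp/tmp.H6UjwbIqEu while the target only registered /tmp/tmp.pYpTKAcbrx for host1, so the TLS session setup fails, the connection is torn down, and bdev_nvme_attach_controller surfaces it as an Input/output error in the JSON-RPC exchange that follows. The failing call, condensed from the command line above:

    # Expected to fail: this key does not match the PSK registered for host1 on the target.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.H6UjwbIqEu

Because tls.sh wraps the whole run_bdevperf call in NOT, this non-zero outcome is what lets the test case pass.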
00:24:29.090 request: 00:24:29.090 { 00:24:29.090 "name": "TLSTEST", 00:24:29.090 "trtype": "tcp", 00:24:29.090 "traddr": "10.0.0.2", 00:24:29.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.090 "adrfam": "ipv4", 00:24:29.090 "trsvcid": "4420", 00:24:29.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.090 "psk": "/tmp/tmp.H6UjwbIqEu", 00:24:29.090 "method": "bdev_nvme_attach_controller", 00:24:29.090 "req_id": 1 00:24:29.090 } 00:24:29.090 Got JSON-RPC error response 00:24:29.090 response: 00:24:29.090 { 00:24:29.090 "code": -5, 00:24:29.090 "message": "Input/output error" 00:24:29.090 } 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4043600 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4043600 ']' 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4043600 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043600 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043600' 00:24:29.090 killing process with pid 4043600 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4043600 00:24:29.090 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.090 00:24:29.090 Latency(us) 00:24:29.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.090 =================================================================================================================== 00:24:29.090 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.090 [2024-07-12 01:42:54.408256] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4043600 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pYpTKAcbrx 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pYpTKAcbrx 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pYpTKAcbrx 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:29.090 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pYpTKAcbrx' 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4043799 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4043799 /var/tmp/bdevperf.sock 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4043799 ']' 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.091 [2024-07-12 01:42:54.557322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:29.091 [2024-07-12 01:42:54.557378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043799 ] 00:24:29.091 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.091 [2024-07-12 01:42:54.612127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.091 [2024-07-12 01:42:54.639198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.pYpTKAcbrx 00:24:29.091 [2024-07-12 01:42:54.849572] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.091 [2024-07-12 01:42:54.849625] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:29.091 [2024-07-12 01:42:54.860422] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:29.091 [2024-07-12 01:42:54.860447] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:29.091 [2024-07-12 01:42:54.860466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.091 [2024-07-12 01:42:54.860731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2405380 (107): Transport endpoint is not connected 00:24:29.091 [2024-07-12 01:42:54.861727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2405380 (9): Bad file descriptor 00:24:29.091 [2024-07-12 01:42:54.862729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.091 [2024-07-12 01:42:54.862736] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:29.091 [2024-07-12 01:42:54.862743] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:29.091 request: 00:24:29.091 { 00:24:29.091 "name": "TLSTEST", 00:24:29.091 "trtype": "tcp", 00:24:29.091 "traddr": "10.0.0.2", 00:24:29.091 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:29.091 "adrfam": "ipv4", 00:24:29.091 "trsvcid": "4420", 00:24:29.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.091 "psk": "/tmp/tmp.pYpTKAcbrx", 00:24:29.091 "method": "bdev_nvme_attach_controller", 00:24:29.091 "req_id": 1 00:24:29.091 } 00:24:29.091 Got JSON-RPC error response 00:24:29.091 response: 00:24:29.091 { 00:24:29.091 "code": -5, 00:24:29.091 "message": "Input/output error" 00:24:29.091 } 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4043799 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4043799 ']' 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4043799 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043799 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043799' 00:24:29.091 killing process with pid 4043799 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4043799 00:24:29.091 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.091 00:24:29.091 Latency(us) 00:24:29.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.091 =================================================================================================================== 00:24:29.091 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.091 [2024-07-12 01:42:54.949662] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:29.091 01:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4043799 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pYpTKAcbrx 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pYpTKAcbrx 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pYpTKAcbrx 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pYpTKAcbrx' 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4043808 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4043808 /var/tmp/bdevperf.sock 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4043808 ']' 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.091 [2024-07-12 01:42:55.096182] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:29.091 [2024-07-12 01:42:55.096241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043808 ] 00:24:29.091 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.091 [2024-07-12 01:42:55.152224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.091 [2024-07-12 01:42:55.178126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:29.091 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pYpTKAcbrx 00:24:29.091 [2024-07-12 01:42:55.392418] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.091 [2024-07-12 01:42:55.392479] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:29.091 [2024-07-12 01:42:55.401750] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:29.091 [2024-07-12 01:42:55.401769] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:29.091 [2024-07-12 01:42:55.401788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.091 [2024-07-12 01:42:55.402578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc7380 (107): Transport endpoint is not connected 00:24:29.091 [2024-07-12 01:42:55.403573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc7380 (9): Bad file descriptor 00:24:29.091 [2024-07-12 01:42:55.404574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:29.091 [2024-07-12 01:42:55.404582] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:29.091 [2024-07-12 01:42:55.404589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
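The attach in this case fails before any NVMe-level handshake: the target derives a TLS PSK identity from the connecting host NQN and the requested subsystem NQN, finds no key registered for that pair, and drops the connection, which the initiator sees as errno 107 and then as the -5 Input/output error in the request/response dump that follows. A minimal illustration of the identity string involved, reconstructed only from the "Could not find PSK for identity" errors above:

# Illustration only: the identity the target looks up in its PSK table,
# exactly as printed by tcp.c/posix.c in the errors above.
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
echo "NVMe0R01 ${hostnqn} ${subnqn}"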
00:24:29.091 request: 00:24:29.091 { 00:24:29.091 "name": "TLSTEST", 00:24:29.091 "trtype": "tcp", 00:24:29.091 "traddr": "10.0.0.2", 00:24:29.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.091 "adrfam": "ipv4", 00:24:29.091 "trsvcid": "4420", 00:24:29.091 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:29.091 "psk": "/tmp/tmp.pYpTKAcbrx", 00:24:29.091 "method": "bdev_nvme_attach_controller", 00:24:29.091 "req_id": 1 00:24:29.091 } 00:24:29.091 Got JSON-RPC error response 00:24:29.091 response: 00:24:29.092 { 00:24:29.092 "code": -5, 00:24:29.092 "message": "Input/output error" 00:24:29.092 } 00:24:29.092 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4043808 00:24:29.092 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4043808 ']' 00:24:29.092 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4043808 00:24:29.092 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.092 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043808 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043808' 00:24:29.353 killing process with pid 4043808 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4043808 00:24:29.353 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.353 00:24:29.353 Latency(us) 00:24:29.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.353 =================================================================================================================== 00:24:29.353 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.353 [2024-07-12 01:42:55.493846] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4043808 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4043910 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4043910 /var/tmp/bdevperf.sock 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4043910 ']' 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:29.353 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.353 [2024-07-12 01:42:55.624517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:29.353 [2024-07-12 01:42:55.624575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043910 ] 00:24:29.353 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.353 [2024-07-12 01:42:55.674822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.353 [2024-07-12 01:42:55.702235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.614 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.614 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:29.615 [2024-07-12 01:42:55.923043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.615 [2024-07-12 01:42:55.925042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc00990 (9): Bad file descriptor 00:24:29.615 [2024-07-12 01:42:55.926042] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.615 [2024-07-12 01:42:55.926050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:29.615 [2024-07-12 01:42:55.926057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:29.615 request: 00:24:29.615 { 00:24:29.615 "name": "TLSTEST", 00:24:29.615 "trtype": "tcp", 00:24:29.615 "traddr": "10.0.0.2", 00:24:29.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.615 "adrfam": "ipv4", 00:24:29.615 "trsvcid": "4420", 00:24:29.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.615 "method": "bdev_nvme_attach_controller", 00:24:29.615 "req_id": 1 00:24:29.615 } 00:24:29.615 Got JSON-RPC error response 00:24:29.615 response: 00:24:29.615 { 00:24:29.615 "code": -5, 00:24:29.615 "message": "Input/output error" 00:24:29.615 } 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4043910 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4043910 ']' 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4043910 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.615 01:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043910 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043910' 00:24:29.876 killing process with pid 4043910 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4043910 00:24:29.876 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.876 00:24:29.876 Latency(us) 00:24:29.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.876 =================================================================================================================== 00:24:29.876 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4043910 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4038730 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4038730 ']' 00:24:29.876 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4038730 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4038730 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4038730' 00:24:29.877 killing process with pid 4038730 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4038730 
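Each of these negative cases drives the same run_bdevperf helper: start the bdevperf application in wait mode (-z) on its own RPC socket, attach an NVMe-oF controller over that socket, and run I/O only if the attach succeeds. A condensed sketch of that flow, using only commands that appear verbatim in this log (the long Jenkins workspace paths are shortened to the spdk checkout root):

# start bdevperf waiting for RPC configuration
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach over TCP with a TLS PSK; the last case above issued the same call without --psk
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pYpTKAcbrx
# on a successful attach the harness then runs the timed I/O pass
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests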
00:24:29.877 [2024-07-12 01:42:56.161512] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:29.877 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4038730 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0XcKU4U01A 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0XcKU4U01A 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4044166 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4044166 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4044166 ']' 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:30.139 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.139 [2024-07-12 01:42:56.386110] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:30.139 [2024-07-12 01:42:56.386162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.139 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.139 [2024-07-12 01:42:56.442644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.139 [2024-07-12 01:42:56.471147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.139 [2024-07-12 01:42:56.471180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.139 [2024-07-12 01:42:56.471186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.139 [2024-07-12 01:42:56.471192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.139 [2024-07-12 01:42:56.471196] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.139 [2024-07-12 01:42:56.471211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0XcKU4U01A 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:30.401 [2024-07-12 01:42:56.714649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.401 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:30.662 01:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:30.662 [2024-07-12 01:42:57.007362] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:30.662 [2024-07-12 01:42:57.007520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.922 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:30.922 malloc0 00:24:30.922 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 
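The long-form key used from here on comes from format_interchange_psk, which wraps the configured key in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 for the second argument above), and a base64 field, giving the NVMeTLSkey-1:02:...: value written to /tmp/tmp.0XcKU4U01A with mode 0600. The inline python body behind format_key is not shown in this log, so the following is only a sketch consistent with the visible output; in particular the appended CRC32 and its little-endian byte order are assumptions:

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python - <<EOF
import base64, zlib
# assumed encoding: base64(key-bytes || CRC32(key-bytes), little-endian)
crc = zlib.crc32(b"$key").to_bytes(4, byteorder="little")
b64 = base64.b64encode(b"$key" + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, b64))
EOF

If those assumptions hold, this prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: string captured as key_long above.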
00:24:31.183 [2024-07-12 01:42:57.454251] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XcKU4U01A 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0XcKU4U01A' 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4044376 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4044376 /var/tmp/bdevperf.sock 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4044376 ']' 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.183 01:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.444 [2024-07-12 01:42:57.543043] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:31.444 [2024-07-12 01:42:57.543107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044376 ] 00:24:31.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.444 [2024-07-12 01:42:57.601503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.444 [2024-07-12 01:42:57.629653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.124 01:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:32.124 01:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:32.124 01:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:24:32.385 [2024-07-12 01:42:58.453608] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.385 [2024-07-12 01:42:58.453665] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:32.385 TLSTESTn1 00:24:32.385 01:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:32.385 Running I/O for 10 seconds... 00:24:42.380 00:24:42.380 Latency(us) 00:24:42.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.380 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:42.380 Verification LBA range: start 0x0 length 0x2000 00:24:42.380 TLSTESTn1 : 10.02 3476.05 13.58 0.00 0.00 36769.51 4505.60 86070.61 00:24:42.380 =================================================================================================================== 00:24:42.380 Total : 3476.05 13.58 0.00 0.00 36769.51 4505.60 86070.61 00:24:42.380 0 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4044376 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4044376 ']' 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4044376 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:42.380 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4044376 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4044376' 00:24:42.641 killing process with pid 4044376 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4044376 00:24:42.641 Received shutdown signal, test time was about 10.000000 seconds 00:24:42.641 00:24:42.641 Latency(us) 00:24:42.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:42.641 =================================================================================================================== 00:24:42.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.641 [2024-07-12 01:43:08.751613] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4044376 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0XcKU4U01A 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XcKU4U01A 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XcKU4U01A 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XcKU4U01A 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0XcKU4U01A' 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4046546 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4046546 /var/tmp/bdevperf.sock 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4046546 ']' 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:42.641 01:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.641 [2024-07-12 01:43:08.912903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:42.641 [2024-07-12 01:43:08.912955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046546 ] 00:24:42.641 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.641 [2024-07-12 01:43:08.969314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.641 [2024-07-12 01:43:08.995037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:24:42.901 [2024-07-12 01:43:09.209376] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:42.901 [2024-07-12 01:43:09.209420] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:42.901 [2024-07-12 01:43:09.209425] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0XcKU4U01A 00:24:42.901 request: 00:24:42.901 { 00:24:42.901 "name": "TLSTEST", 00:24:42.901 "trtype": "tcp", 00:24:42.901 "traddr": "10.0.0.2", 00:24:42.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.901 "adrfam": "ipv4", 00:24:42.901 "trsvcid": "4420", 00:24:42.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.901 "psk": "/tmp/tmp.0XcKU4U01A", 00:24:42.901 "method": "bdev_nvme_attach_controller", 00:24:42.901 "req_id": 1 00:24:42.901 } 00:24:42.901 Got JSON-RPC error response 00:24:42.901 response: 00:24:42.901 { 00:24:42.901 "code": -1, 00:24:42.901 "message": "Operation not permitted" 00:24:42.901 } 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4046546 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4046546 ']' 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4046546 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:42.901 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4046546 00:24:43.161 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:43.161 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:43.161 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4046546' 00:24:43.161 killing process with pid 4046546 00:24:43.161 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4046546 00:24:43.161 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.161 00:24:43.161 Latency(us) 00:24:43.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.161 =================================================================================================================== 00:24:43.162 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 4046546 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4044166 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4044166 ']' 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4044166 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4044166 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4044166' 00:24:43.162 killing process with pid 4044166 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4044166 00:24:43.162 [2024-07-12 01:43:09.444164] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:43.162 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4044166 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4046658 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4046658 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4046658 ']' 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:43.422 01:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.422 [2024-07-12 01:43:09.615103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:43.422 [2024-07-12 01:43:09.615158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.422 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.422 [2024-07-12 01:43:09.701343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.422 [2024-07-12 01:43:09.728771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.422 [2024-07-12 01:43:09.728804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.422 [2024-07-12 01:43:09.728810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.422 [2024-07-12 01:43:09.728815] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.422 [2024-07-12 01:43:09.728819] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.422 [2024-07-12 01:43:09.728833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0XcKU4U01A 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:44.362 [2024-07-12 01:43:10.553756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.362 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:44.622 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:44.622 [2024-07-12 01:43:10.846461] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
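This is the same target-side TLS setup sequence as earlier in the run: create the TCP transport, create the subsystem, add a listener with -k (the listener that the config dump near the end of this log records with "secure_channel": true), back it with a malloc bdev, and register the host together with the PSK path. Condensed from the rpc.py calls in this log (paths shortened); here the final add_host step is the one expected to fail just below, because the key file is still world-readable at this point:

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A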
00:24:44.622 [2024-07-12 01:43:10.846630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.622 01:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:44.881 malloc0 00:24:44.881 01:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:44.881 01:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:24:45.140 [2024-07-12 01:43:11.293496] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:45.140 [2024-07-12 01:43:11.293516] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:45.140 [2024-07-12 01:43:11.293536] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:45.140 request: 00:24:45.140 { 00:24:45.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.140 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.140 "psk": "/tmp/tmp.0XcKU4U01A", 00:24:45.140 "method": "nvmf_subsystem_add_host", 00:24:45.140 "req_id": 1 00:24:45.140 } 00:24:45.140 Got JSON-RPC error response 00:24:45.140 response: 00:24:45.140 { 00:24:45.140 "code": -32603, 00:24:45.140 "message": "Internal error" 00:24:45.140 } 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4046658 ']' 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4046658' 00:24:45.140 killing process with pid 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4046658 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0XcKU4U01A 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=4047112 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4047112 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4047112 ']' 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.140 01:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.400 [2024-07-12 01:43:11.547788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:45.400 [2024-07-12 01:43:11.547862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.400 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.400 [2024-07-12 01:43:11.637563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.400 [2024-07-12 01:43:11.667173] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.400 [2024-07-12 01:43:11.667216] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.400 [2024-07-12 01:43:11.667222] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.400 [2024-07-12 01:43:11.667227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.400 [2024-07-12 01:43:11.667237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
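Both sides enforce the same restriction exercised above: with the key at 0666, bdev_nvme refused to load it ("Incorrect permissions for PSK file", RPC error -1 Operation not permitted) and nvmf_subsystem_add_host refused it as well ("Could not retrieve PSK from file", RPC error -32603 Internal error). The script has just restored owner-only permissions (chmod 0600 at target/tls.sh@181 above) before bringing up this fresh target for the final positive run; restated minimally:

# the PSK file must be readable by its owner only, or both initiator and target reject it
chmod 0600 /tmp/tmp.0XcKU4U01A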
00:24:45.400 [2024-07-12 01:43:11.667254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.971 01:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:45.971 01:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:45.971 01:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.971 01:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.971 01:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.231 01:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.231 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:24:46.231 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0XcKU4U01A 00:24:46.231 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:46.231 [2024-07-12 01:43:12.477874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.231 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:46.491 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:46.491 [2024-07-12 01:43:12.770580] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.491 [2024-07-12 01:43:12.770741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.491 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:46.751 malloc0 00:24:46.751 01:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:46.751 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:24:47.011 [2024-07-12 01:43:13.217455] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4047483 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4047483 /var/tmp/bdevperf.sock 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4047483 ']' 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.011 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.011 [2024-07-12 01:43:13.263039] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:47.011 [2024-07-12 01:43:13.263089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047483 ] 00:24:47.011 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.011 [2024-07-12 01:43:13.319611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.011 [2024-07-12 01:43:13.347791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.271 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:47.271 01:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:47.271 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:24:47.271 [2024-07-12 01:43:13.562253] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.271 [2024-07-12 01:43:13.562316] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:47.531 TLSTESTn1 00:24:47.531 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:47.791 01:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:47.791 "subsystems": [ 00:24:47.791 { 00:24:47.791 "subsystem": "keyring", 00:24:47.791 "config": [] 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "subsystem": "iobuf", 00:24:47.791 "config": [ 00:24:47.791 { 00:24:47.791 "method": "iobuf_set_options", 00:24:47.791 "params": { 00:24:47.791 "small_pool_count": 8192, 00:24:47.791 "large_pool_count": 1024, 00:24:47.791 "small_bufsize": 8192, 00:24:47.791 "large_bufsize": 135168 00:24:47.791 } 00:24:47.791 } 00:24:47.791 ] 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "subsystem": "sock", 00:24:47.791 "config": [ 00:24:47.791 { 00:24:47.791 "method": "sock_set_default_impl", 00:24:47.791 "params": { 00:24:47.791 "impl_name": "posix" 00:24:47.791 } 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "method": "sock_impl_set_options", 00:24:47.791 "params": { 00:24:47.791 "impl_name": "ssl", 00:24:47.791 "recv_buf_size": 4096, 00:24:47.791 "send_buf_size": 4096, 00:24:47.791 "enable_recv_pipe": true, 00:24:47.791 "enable_quickack": false, 00:24:47.791 "enable_placement_id": 0, 00:24:47.791 "enable_zerocopy_send_server": true, 00:24:47.791 "enable_zerocopy_send_client": false, 00:24:47.791 "zerocopy_threshold": 0, 00:24:47.791 "tls_version": 0, 00:24:47.791 "enable_ktls": false 00:24:47.791 } 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "method": "sock_impl_set_options", 00:24:47.791 "params": { 00:24:47.791 "impl_name": "posix", 00:24:47.791 "recv_buf_size": 2097152, 00:24:47.791 "send_buf_size": 
2097152, 00:24:47.791 "enable_recv_pipe": true, 00:24:47.791 "enable_quickack": false, 00:24:47.791 "enable_placement_id": 0, 00:24:47.791 "enable_zerocopy_send_server": true, 00:24:47.791 "enable_zerocopy_send_client": false, 00:24:47.791 "zerocopy_threshold": 0, 00:24:47.791 "tls_version": 0, 00:24:47.791 "enable_ktls": false 00:24:47.791 } 00:24:47.791 } 00:24:47.791 ] 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "subsystem": "vmd", 00:24:47.791 "config": [] 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "subsystem": "accel", 00:24:47.791 "config": [ 00:24:47.791 { 00:24:47.791 "method": "accel_set_options", 00:24:47.791 "params": { 00:24:47.791 "small_cache_size": 128, 00:24:47.791 "large_cache_size": 16, 00:24:47.791 "task_count": 2048, 00:24:47.791 "sequence_count": 2048, 00:24:47.791 "buf_count": 2048 00:24:47.791 } 00:24:47.791 } 00:24:47.791 ] 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "subsystem": "bdev", 00:24:47.791 "config": [ 00:24:47.791 { 00:24:47.791 "method": "bdev_set_options", 00:24:47.791 "params": { 00:24:47.791 "bdev_io_pool_size": 65535, 00:24:47.791 "bdev_io_cache_size": 256, 00:24:47.791 "bdev_auto_examine": true, 00:24:47.791 "iobuf_small_cache_size": 128, 00:24:47.791 "iobuf_large_cache_size": 16 00:24:47.791 } 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "method": "bdev_raid_set_options", 00:24:47.791 "params": { 00:24:47.791 "process_window_size_kb": 1024 00:24:47.791 } 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "method": "bdev_iscsi_set_options", 00:24:47.791 "params": { 00:24:47.791 "timeout_sec": 30 00:24:47.791 } 00:24:47.791 }, 00:24:47.791 { 00:24:47.791 "method": "bdev_nvme_set_options", 00:24:47.791 "params": { 00:24:47.791 "action_on_timeout": "none", 00:24:47.791 "timeout_us": 0, 00:24:47.791 "timeout_admin_us": 0, 00:24:47.791 "keep_alive_timeout_ms": 10000, 00:24:47.791 "arbitration_burst": 0, 00:24:47.791 "low_priority_weight": 0, 00:24:47.792 "medium_priority_weight": 0, 00:24:47.792 "high_priority_weight": 0, 00:24:47.792 "nvme_adminq_poll_period_us": 10000, 00:24:47.792 "nvme_ioq_poll_period_us": 0, 00:24:47.792 "io_queue_requests": 0, 00:24:47.792 "delay_cmd_submit": true, 00:24:47.792 "transport_retry_count": 4, 00:24:47.792 "bdev_retry_count": 3, 00:24:47.792 "transport_ack_timeout": 0, 00:24:47.792 "ctrlr_loss_timeout_sec": 0, 00:24:47.792 "reconnect_delay_sec": 0, 00:24:47.792 "fast_io_fail_timeout_sec": 0, 00:24:47.792 "disable_auto_failback": false, 00:24:47.792 "generate_uuids": false, 00:24:47.792 "transport_tos": 0, 00:24:47.792 "nvme_error_stat": false, 00:24:47.792 "rdma_srq_size": 0, 00:24:47.792 "io_path_stat": false, 00:24:47.792 "allow_accel_sequence": false, 00:24:47.792 "rdma_max_cq_size": 0, 00:24:47.792 "rdma_cm_event_timeout_ms": 0, 00:24:47.792 "dhchap_digests": [ 00:24:47.792 "sha256", 00:24:47.792 "sha384", 00:24:47.792 "sha512" 00:24:47.792 ], 00:24:47.792 "dhchap_dhgroups": [ 00:24:47.792 "null", 00:24:47.792 "ffdhe2048", 00:24:47.792 "ffdhe3072", 00:24:47.792 "ffdhe4096", 00:24:47.792 "ffdhe6144", 00:24:47.792 "ffdhe8192" 00:24:47.792 ] 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "bdev_nvme_set_hotplug", 00:24:47.792 "params": { 00:24:47.792 "period_us": 100000, 00:24:47.792 "enable": false 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "bdev_malloc_create", 00:24:47.792 "params": { 00:24:47.792 "name": "malloc0", 00:24:47.792 "num_blocks": 8192, 00:24:47.792 "block_size": 4096, 00:24:47.792 "physical_block_size": 4096, 00:24:47.792 "uuid": 
"5e23b6ca-5043-422b-824a-c751f72f2740", 00:24:47.792 "optimal_io_boundary": 0 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "bdev_wait_for_examine" 00:24:47.792 } 00:24:47.792 ] 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "subsystem": "nbd", 00:24:47.792 "config": [] 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "subsystem": "scheduler", 00:24:47.792 "config": [ 00:24:47.792 { 00:24:47.792 "method": "framework_set_scheduler", 00:24:47.792 "params": { 00:24:47.792 "name": "static" 00:24:47.792 } 00:24:47.792 } 00:24:47.792 ] 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "subsystem": "nvmf", 00:24:47.792 "config": [ 00:24:47.792 { 00:24:47.792 "method": "nvmf_set_config", 00:24:47.792 "params": { 00:24:47.792 "discovery_filter": "match_any", 00:24:47.792 "admin_cmd_passthru": { 00:24:47.792 "identify_ctrlr": false 00:24:47.792 } 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_set_max_subsystems", 00:24:47.792 "params": { 00:24:47.792 "max_subsystems": 1024 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_set_crdt", 00:24:47.792 "params": { 00:24:47.792 "crdt1": 0, 00:24:47.792 "crdt2": 0, 00:24:47.792 "crdt3": 0 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_create_transport", 00:24:47.792 "params": { 00:24:47.792 "trtype": "TCP", 00:24:47.792 "max_queue_depth": 128, 00:24:47.792 "max_io_qpairs_per_ctrlr": 127, 00:24:47.792 "in_capsule_data_size": 4096, 00:24:47.792 "max_io_size": 131072, 00:24:47.792 "io_unit_size": 131072, 00:24:47.792 "max_aq_depth": 128, 00:24:47.792 "num_shared_buffers": 511, 00:24:47.792 "buf_cache_size": 4294967295, 00:24:47.792 "dif_insert_or_strip": false, 00:24:47.792 "zcopy": false, 00:24:47.792 "c2h_success": false, 00:24:47.792 "sock_priority": 0, 00:24:47.792 "abort_timeout_sec": 1, 00:24:47.792 "ack_timeout": 0, 00:24:47.792 "data_wr_pool_size": 0 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_create_subsystem", 00:24:47.792 "params": { 00:24:47.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.792 "allow_any_host": false, 00:24:47.792 "serial_number": "SPDK00000000000001", 00:24:47.792 "model_number": "SPDK bdev Controller", 00:24:47.792 "max_namespaces": 10, 00:24:47.792 "min_cntlid": 1, 00:24:47.792 "max_cntlid": 65519, 00:24:47.792 "ana_reporting": false 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_subsystem_add_host", 00:24:47.792 "params": { 00:24:47.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.792 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.792 "psk": "/tmp/tmp.0XcKU4U01A" 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_subsystem_add_ns", 00:24:47.792 "params": { 00:24:47.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.792 "namespace": { 00:24:47.792 "nsid": 1, 00:24:47.792 "bdev_name": "malloc0", 00:24:47.792 "nguid": "5E23B6CA5043422B824AC751F72F2740", 00:24:47.792 "uuid": "5e23b6ca-5043-422b-824a-c751f72f2740", 00:24:47.792 "no_auto_visible": false 00:24:47.792 } 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "nvmf_subsystem_add_listener", 00:24:47.792 "params": { 00:24:47.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.792 "listen_address": { 00:24:47.792 "trtype": "TCP", 00:24:47.792 "adrfam": "IPv4", 00:24:47.792 "traddr": "10.0.0.2", 00:24:47.792 "trsvcid": "4420" 00:24:47.792 }, 00:24:47.792 "secure_channel": true 00:24:47.792 } 00:24:47.792 } 00:24:47.792 ] 00:24:47.792 } 00:24:47.792 ] 00:24:47.792 }' 00:24:47.792 01:43:13 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:47.792 01:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:47.792 "subsystems": [ 00:24:47.792 { 00:24:47.792 "subsystem": "keyring", 00:24:47.792 "config": [] 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "subsystem": "iobuf", 00:24:47.792 "config": [ 00:24:47.792 { 00:24:47.792 "method": "iobuf_set_options", 00:24:47.792 "params": { 00:24:47.792 "small_pool_count": 8192, 00:24:47.792 "large_pool_count": 1024, 00:24:47.792 "small_bufsize": 8192, 00:24:47.792 "large_bufsize": 135168 00:24:47.792 } 00:24:47.792 } 00:24:47.792 ] 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "subsystem": "sock", 00:24:47.792 "config": [ 00:24:47.792 { 00:24:47.792 "method": "sock_set_default_impl", 00:24:47.792 "params": { 00:24:47.792 "impl_name": "posix" 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "sock_impl_set_options", 00:24:47.792 "params": { 00:24:47.792 "impl_name": "ssl", 00:24:47.792 "recv_buf_size": 4096, 00:24:47.792 "send_buf_size": 4096, 00:24:47.792 "enable_recv_pipe": true, 00:24:47.792 "enable_quickack": false, 00:24:47.792 "enable_placement_id": 0, 00:24:47.792 "enable_zerocopy_send_server": true, 00:24:47.792 "enable_zerocopy_send_client": false, 00:24:47.792 "zerocopy_threshold": 0, 00:24:47.792 "tls_version": 0, 00:24:47.792 "enable_ktls": false 00:24:47.792 } 00:24:47.792 }, 00:24:47.792 { 00:24:47.792 "method": "sock_impl_set_options", 00:24:47.792 "params": { 00:24:47.792 "impl_name": "posix", 00:24:47.792 "recv_buf_size": 2097152, 00:24:47.792 "send_buf_size": 2097152, 00:24:47.792 "enable_recv_pipe": true, 00:24:47.792 "enable_quickack": false, 00:24:47.792 "enable_placement_id": 0, 00:24:47.792 "enable_zerocopy_send_server": true, 00:24:47.792 "enable_zerocopy_send_client": false, 00:24:47.792 "zerocopy_threshold": 0, 00:24:47.792 "tls_version": 0, 00:24:47.793 "enable_ktls": false 00:24:47.793 } 00:24:47.793 } 00:24:47.793 ] 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "subsystem": "vmd", 00:24:47.793 "config": [] 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "subsystem": "accel", 00:24:47.793 "config": [ 00:24:47.793 { 00:24:47.793 "method": "accel_set_options", 00:24:47.793 "params": { 00:24:47.793 "small_cache_size": 128, 00:24:47.793 "large_cache_size": 16, 00:24:47.793 "task_count": 2048, 00:24:47.793 "sequence_count": 2048, 00:24:47.793 "buf_count": 2048 00:24:47.793 } 00:24:47.793 } 00:24:47.793 ] 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "subsystem": "bdev", 00:24:47.793 "config": [ 00:24:47.793 { 00:24:47.793 "method": "bdev_set_options", 00:24:47.793 "params": { 00:24:47.793 "bdev_io_pool_size": 65535, 00:24:47.793 "bdev_io_cache_size": 256, 00:24:47.793 "bdev_auto_examine": true, 00:24:47.793 "iobuf_small_cache_size": 128, 00:24:47.793 "iobuf_large_cache_size": 16 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_raid_set_options", 00:24:47.793 "params": { 00:24:47.793 "process_window_size_kb": 1024 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_iscsi_set_options", 00:24:47.793 "params": { 00:24:47.793 "timeout_sec": 30 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_nvme_set_options", 00:24:47.793 "params": { 00:24:47.793 "action_on_timeout": "none", 00:24:47.793 "timeout_us": 0, 00:24:47.793 "timeout_admin_us": 0, 00:24:47.793 "keep_alive_timeout_ms": 10000, 00:24:47.793 "arbitration_burst": 0, 
00:24:47.793 "low_priority_weight": 0, 00:24:47.793 "medium_priority_weight": 0, 00:24:47.793 "high_priority_weight": 0, 00:24:47.793 "nvme_adminq_poll_period_us": 10000, 00:24:47.793 "nvme_ioq_poll_period_us": 0, 00:24:47.793 "io_queue_requests": 512, 00:24:47.793 "delay_cmd_submit": true, 00:24:47.793 "transport_retry_count": 4, 00:24:47.793 "bdev_retry_count": 3, 00:24:47.793 "transport_ack_timeout": 0, 00:24:47.793 "ctrlr_loss_timeout_sec": 0, 00:24:47.793 "reconnect_delay_sec": 0, 00:24:47.793 "fast_io_fail_timeout_sec": 0, 00:24:47.793 "disable_auto_failback": false, 00:24:47.793 "generate_uuids": false, 00:24:47.793 "transport_tos": 0, 00:24:47.793 "nvme_error_stat": false, 00:24:47.793 "rdma_srq_size": 0, 00:24:47.793 "io_path_stat": false, 00:24:47.793 "allow_accel_sequence": false, 00:24:47.793 "rdma_max_cq_size": 0, 00:24:47.793 "rdma_cm_event_timeout_ms": 0, 00:24:47.793 "dhchap_digests": [ 00:24:47.793 "sha256", 00:24:47.793 "sha384", 00:24:47.793 "sha512" 00:24:47.793 ], 00:24:47.793 "dhchap_dhgroups": [ 00:24:47.793 "null", 00:24:47.793 "ffdhe2048", 00:24:47.793 "ffdhe3072", 00:24:47.793 "ffdhe4096", 00:24:47.793 "ffdhe6144", 00:24:47.793 "ffdhe8192" 00:24:47.793 ] 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_nvme_attach_controller", 00:24:47.793 "params": { 00:24:47.793 "name": "TLSTEST", 00:24:47.793 "trtype": "TCP", 00:24:47.793 "adrfam": "IPv4", 00:24:47.793 "traddr": "10.0.0.2", 00:24:47.793 "trsvcid": "4420", 00:24:47.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.793 "prchk_reftag": false, 00:24:47.793 "prchk_guard": false, 00:24:47.793 "ctrlr_loss_timeout_sec": 0, 00:24:47.793 "reconnect_delay_sec": 0, 00:24:47.793 "fast_io_fail_timeout_sec": 0, 00:24:47.793 "psk": "/tmp/tmp.0XcKU4U01A", 00:24:47.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.793 "hdgst": false, 00:24:47.793 "ddgst": false 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_nvme_set_hotplug", 00:24:47.793 "params": { 00:24:47.793 "period_us": 100000, 00:24:47.793 "enable": false 00:24:47.793 } 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "method": "bdev_wait_for_examine" 00:24:47.793 } 00:24:47.793 ] 00:24:47.793 }, 00:24:47.793 { 00:24:47.793 "subsystem": "nbd", 00:24:47.793 "config": [] 00:24:47.793 } 00:24:47.793 ] 00:24:47.793 }' 00:24:47.793 01:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4047483 00:24:47.793 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4047483 ']' 00:24:47.793 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4047483 00:24:47.793 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4047483 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4047483' 00:24:48.054 killing process with pid 4047483 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4047483 00:24:48.054 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.054 00:24:48.054 Latency(us) 00:24:48.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:48.054 =================================================================================================================== 00:24:48.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.054 [2024-07-12 01:43:14.198752] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4047483 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4047112 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4047112 ']' 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4047112 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4047112 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4047112' 00:24:48.054 killing process with pid 4047112 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4047112 00:24:48.054 [2024-07-12 01:43:14.354572] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:48.054 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4047112 00:24:48.316 01:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:48.316 01:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.316 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:48.316 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.316 01:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:48.316 "subsystems": [ 00:24:48.316 { 00:24:48.316 "subsystem": "keyring", 00:24:48.316 "config": [] 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "subsystem": "iobuf", 00:24:48.316 "config": [ 00:24:48.316 { 00:24:48.316 "method": "iobuf_set_options", 00:24:48.316 "params": { 00:24:48.316 "small_pool_count": 8192, 00:24:48.316 "large_pool_count": 1024, 00:24:48.316 "small_bufsize": 8192, 00:24:48.316 "large_bufsize": 135168 00:24:48.316 } 00:24:48.316 } 00:24:48.316 ] 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "subsystem": "sock", 00:24:48.316 "config": [ 00:24:48.316 { 00:24:48.316 "method": "sock_set_default_impl", 00:24:48.316 "params": { 00:24:48.316 "impl_name": "posix" 00:24:48.316 } 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "method": "sock_impl_set_options", 00:24:48.316 "params": { 00:24:48.316 "impl_name": "ssl", 00:24:48.316 "recv_buf_size": 4096, 00:24:48.316 "send_buf_size": 4096, 00:24:48.316 "enable_recv_pipe": true, 00:24:48.316 "enable_quickack": false, 00:24:48.316 "enable_placement_id": 0, 00:24:48.316 "enable_zerocopy_send_server": true, 00:24:48.316 "enable_zerocopy_send_client": false, 00:24:48.316 "zerocopy_threshold": 0, 00:24:48.316 "tls_version": 0, 00:24:48.316 "enable_ktls": false 00:24:48.316 } 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "method": "sock_impl_set_options", 
00:24:48.316 "params": { 00:24:48.316 "impl_name": "posix", 00:24:48.316 "recv_buf_size": 2097152, 00:24:48.316 "send_buf_size": 2097152, 00:24:48.316 "enable_recv_pipe": true, 00:24:48.316 "enable_quickack": false, 00:24:48.316 "enable_placement_id": 0, 00:24:48.316 "enable_zerocopy_send_server": true, 00:24:48.316 "enable_zerocopy_send_client": false, 00:24:48.316 "zerocopy_threshold": 0, 00:24:48.316 "tls_version": 0, 00:24:48.316 "enable_ktls": false 00:24:48.316 } 00:24:48.316 } 00:24:48.316 ] 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "subsystem": "vmd", 00:24:48.316 "config": [] 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "subsystem": "accel", 00:24:48.316 "config": [ 00:24:48.316 { 00:24:48.316 "method": "accel_set_options", 00:24:48.316 "params": { 00:24:48.316 "small_cache_size": 128, 00:24:48.316 "large_cache_size": 16, 00:24:48.316 "task_count": 2048, 00:24:48.316 "sequence_count": 2048, 00:24:48.316 "buf_count": 2048 00:24:48.316 } 00:24:48.316 } 00:24:48.316 ] 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "subsystem": "bdev", 00:24:48.316 "config": [ 00:24:48.316 { 00:24:48.316 "method": "bdev_set_options", 00:24:48.316 "params": { 00:24:48.316 "bdev_io_pool_size": 65535, 00:24:48.316 "bdev_io_cache_size": 256, 00:24:48.316 "bdev_auto_examine": true, 00:24:48.316 "iobuf_small_cache_size": 128, 00:24:48.316 "iobuf_large_cache_size": 16 00:24:48.316 } 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "method": "bdev_raid_set_options", 00:24:48.316 "params": { 00:24:48.316 "process_window_size_kb": 1024 00:24:48.316 } 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "method": "bdev_iscsi_set_options", 00:24:48.316 "params": { 00:24:48.316 "timeout_sec": 30 00:24:48.316 } 00:24:48.316 }, 00:24:48.316 { 00:24:48.316 "method": "bdev_nvme_set_options", 00:24:48.316 "params": { 00:24:48.316 "action_on_timeout": "none", 00:24:48.316 "timeout_us": 0, 00:24:48.316 "timeout_admin_us": 0, 00:24:48.316 "keep_alive_timeout_ms": 10000, 00:24:48.316 "arbitration_burst": 0, 00:24:48.316 "low_priority_weight": 0, 00:24:48.316 "medium_priority_weight": 0, 00:24:48.316 "high_priority_weight": 0, 00:24:48.316 "nvme_adminq_poll_period_us": 10000, 00:24:48.316 "nvme_ioq_poll_period_us": 0, 00:24:48.316 "io_queue_requests": 0, 00:24:48.316 "delay_cmd_submit": true, 00:24:48.316 "transport_retry_count": 4, 00:24:48.316 "bdev_retry_count": 3, 00:24:48.316 "transport_ack_timeout": 0, 00:24:48.316 "ctrlr_loss_timeout_sec": 0, 00:24:48.316 "reconnect_delay_sec": 0, 00:24:48.316 "fast_io_fail_timeout_sec": 0, 00:24:48.316 "disable_auto_failback": false, 00:24:48.316 "generate_uuids": false, 00:24:48.316 "transport_tos": 0, 00:24:48.316 "nvme_error_stat": false, 00:24:48.316 "rdma_srq_size": 0, 00:24:48.316 "io_path_stat": false, 00:24:48.316 "allow_accel_sequence": false, 00:24:48.317 "rdma_max_cq_size": 0, 00:24:48.317 "rdma_cm_event_timeout_ms": 0, 00:24:48.317 "dhchap_digests": [ 00:24:48.317 "sha256", 00:24:48.317 "sha384", 00:24:48.317 "sha512" 00:24:48.317 ], 00:24:48.317 "dhchap_dhgroups": [ 00:24:48.317 "null", 00:24:48.317 "ffdhe2048", 00:24:48.317 "ffdhe3072", 00:24:48.317 "ffdhe4096", 00:24:48.317 "ffdhe6144", 00:24:48.317 "ffdhe8192" 00:24:48.317 ] 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "bdev_nvme_set_hotplug", 00:24:48.317 "params": { 00:24:48.317 "period_us": 100000, 00:24:48.317 "enable": false 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "bdev_malloc_create", 00:24:48.317 "params": { 00:24:48.317 "name": "malloc0", 00:24:48.317 "num_blocks": 8192, 
00:24:48.317 "block_size": 4096, 00:24:48.317 "physical_block_size": 4096, 00:24:48.317 "uuid": "5e23b6ca-5043-422b-824a-c751f72f2740", 00:24:48.317 "optimal_io_boundary": 0 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "bdev_wait_for_examine" 00:24:48.317 } 00:24:48.317 ] 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "subsystem": "nbd", 00:24:48.317 "config": [] 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "subsystem": "scheduler", 00:24:48.317 "config": [ 00:24:48.317 { 00:24:48.317 "method": "framework_set_scheduler", 00:24:48.317 "params": { 00:24:48.317 "name": "static" 00:24:48.317 } 00:24:48.317 } 00:24:48.317 ] 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "subsystem": "nvmf", 00:24:48.317 "config": [ 00:24:48.317 { 00:24:48.317 "method": "nvmf_set_config", 00:24:48.317 "params": { 00:24:48.317 "discovery_filter": "match_any", 00:24:48.317 "admin_cmd_passthru": { 00:24:48.317 "identify_ctrlr": false 00:24:48.317 } 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_set_max_subsystems", 00:24:48.317 "params": { 00:24:48.317 "max_subsystems": 1024 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_set_crdt", 00:24:48.317 "params": { 00:24:48.317 "crdt1": 0, 00:24:48.317 "crdt2": 0, 00:24:48.317 "crdt3": 0 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_create_transport", 00:24:48.317 "params": { 00:24:48.317 "trtype": "TCP", 00:24:48.317 "max_queue_depth": 128, 00:24:48.317 "max_io_qpairs_per_ctrlr": 127, 00:24:48.317 "in_capsule_data_size": 4096, 00:24:48.317 "max_io_size": 131072, 00:24:48.317 "io_unit_size": 131072, 00:24:48.317 "max_aq_depth": 128, 00:24:48.317 "num_shared_buffers": 511, 00:24:48.317 "buf_cache_size": 4294967295, 00:24:48.317 "dif_insert_or_strip": false, 00:24:48.317 "zcopy": false, 00:24:48.317 "c2h_success": false, 00:24:48.317 "sock_priority": 0, 00:24:48.317 "abort_timeout_sec": 1, 00:24:48.317 "ack_timeout": 0, 00:24:48.317 "data_wr_pool_size": 0 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_create_subsystem", 00:24:48.317 "params": { 00:24:48.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.317 "allow_any_host": false, 00:24:48.317 "serial_number": "SPDK00000000000001", 00:24:48.317 "model_number": "SPDK bdev Controller", 00:24:48.317 "max_namespaces": 10, 00:24:48.317 "min_cntlid": 1, 00:24:48.317 "max_cntlid": 65519, 00:24:48.317 "ana_reporting": false 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_subsystem_add_host", 00:24:48.317 "params": { 00:24:48.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.317 "host": "nqn.2016-06.io.spdk:host1", 00:24:48.317 "psk": "/tmp/tmp.0XcKU4U01A" 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_subsystem_add_ns", 00:24:48.317 "params": { 00:24:48.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.317 "namespace": { 00:24:48.317 "nsid": 1, 00:24:48.317 "bdev_name": "malloc0", 00:24:48.317 "nguid": "5E23B6CA5043422B824AC751F72F2740", 00:24:48.317 "uuid": "5e23b6ca-5043-422b-824a-c751f72f2740", 00:24:48.317 "no_auto_visible": false 00:24:48.317 } 00:24:48.317 } 00:24:48.317 }, 00:24:48.317 { 00:24:48.317 "method": "nvmf_subsystem_add_listener", 00:24:48.317 "params": { 00:24:48.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.317 "listen_address": { 00:24:48.317 "trtype": "TCP", 00:24:48.317 "adrfam": "IPv4", 00:24:48.317 "traddr": "10.0.0.2", 00:24:48.317 "trsvcid": "4420" 00:24:48.317 }, 00:24:48.317 "secure_channel": true 00:24:48.317 } 
00:24:48.317 } 00:24:48.317 ] 00:24:48.317 } 00:24:48.317 ] 00:24:48.317 }' 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4047645 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4047645 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4047645 ']' 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:48.317 01:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.317 [2024-07-12 01:43:14.532951] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:48.317 [2024-07-12 01:43:14.533021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.317 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.317 [2024-07-12 01:43:14.622242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.317 [2024-07-12 01:43:14.650642] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.317 [2024-07-12 01:43:14.650677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.317 [2024-07-12 01:43:14.650682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.317 [2024-07-12 01:43:14.650687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.317 [2024-07-12 01:43:14.650691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
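The nvmf_tgt starting up here (pid 4047645) is not configured by hand: step 203 above feeds it, on file descriptor 62, the JSON that save_config produced at step 196. Condensed, with the workspace prefix and the script's internals left out, the pattern visible in this trace amounts to roughly:

  # capture the running target's configuration as JSON (step 196)
  tgtconf=$(scripts/rpc.py save_config)
  # start a fresh target from that JSON; the pipe shows up as -c /dev/fd/62 (step 203)
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")

This is a sketch of the shell pattern suggested by the trace, not the verbatim target/tls.sh code.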
00:24:48.317 [2024-07-12 01:43:14.650736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.577 [2024-07-12 01:43:14.829051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.578 [2024-07-12 01:43:14.845025] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:48.578 [2024-07-12 01:43:14.861073] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.578 [2024-07-12 01:43:14.877400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4047989 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4047989 /var/tmp/bdevperf.sock 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4047989 ']' 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
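The initiator side uses the same trick: the bdevperf process being waited on here is launched at step 204 with -c /dev/fd/63, and the JSON echoed below already contains the bdev_nvme_attach_controller entry with the PSK, so the TLS connection to cnode1 is established during startup rather than through a separate RPC. Stripped of the long paths, the commands in this trace are approximately:

  # bdevperf configuration captured from the first TLS run (step 197)
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # replay it into a new bdevperf instance; perform_tests then drives the 10-second verify workload
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

A paraphrase of what the trace shows, not the script itself.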
00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.146 01:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:49.146 "subsystems": [ 00:24:49.146 { 00:24:49.146 "subsystem": "keyring", 00:24:49.146 "config": [] 00:24:49.146 }, 00:24:49.146 { 00:24:49.146 "subsystem": "iobuf", 00:24:49.146 "config": [ 00:24:49.146 { 00:24:49.146 "method": "iobuf_set_options", 00:24:49.146 "params": { 00:24:49.146 "small_pool_count": 8192, 00:24:49.146 "large_pool_count": 1024, 00:24:49.146 "small_bufsize": 8192, 00:24:49.146 "large_bufsize": 135168 00:24:49.146 } 00:24:49.146 } 00:24:49.146 ] 00:24:49.146 }, 00:24:49.146 { 00:24:49.146 "subsystem": "sock", 00:24:49.146 "config": [ 00:24:49.146 { 00:24:49.146 "method": "sock_set_default_impl", 00:24:49.146 "params": { 00:24:49.146 "impl_name": "posix" 00:24:49.146 } 00:24:49.146 }, 00:24:49.146 { 00:24:49.146 "method": "sock_impl_set_options", 00:24:49.146 "params": { 00:24:49.146 "impl_name": "ssl", 00:24:49.146 "recv_buf_size": 4096, 00:24:49.146 "send_buf_size": 4096, 00:24:49.146 "enable_recv_pipe": true, 00:24:49.146 "enable_quickack": false, 00:24:49.146 "enable_placement_id": 0, 00:24:49.146 "enable_zerocopy_send_server": true, 00:24:49.146 "enable_zerocopy_send_client": false, 00:24:49.146 "zerocopy_threshold": 0, 00:24:49.146 "tls_version": 0, 00:24:49.146 "enable_ktls": false 00:24:49.146 } 00:24:49.146 }, 00:24:49.146 { 00:24:49.146 "method": "sock_impl_set_options", 00:24:49.146 "params": { 00:24:49.146 "impl_name": "posix", 00:24:49.146 "recv_buf_size": 2097152, 00:24:49.146 "send_buf_size": 2097152, 00:24:49.146 "enable_recv_pipe": true, 00:24:49.146 "enable_quickack": false, 00:24:49.146 "enable_placement_id": 0, 00:24:49.146 "enable_zerocopy_send_server": true, 00:24:49.146 "enable_zerocopy_send_client": false, 00:24:49.147 "zerocopy_threshold": 0, 00:24:49.147 "tls_version": 0, 00:24:49.147 "enable_ktls": false 00:24:49.147 } 00:24:49.147 } 00:24:49.147 ] 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "subsystem": "vmd", 00:24:49.147 "config": [] 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "subsystem": "accel", 00:24:49.147 "config": [ 00:24:49.147 { 00:24:49.147 "method": "accel_set_options", 00:24:49.147 "params": { 00:24:49.147 "small_cache_size": 128, 00:24:49.147 "large_cache_size": 16, 00:24:49.147 "task_count": 2048, 00:24:49.147 "sequence_count": 2048, 00:24:49.147 "buf_count": 2048 00:24:49.147 } 00:24:49.147 } 00:24:49.147 ] 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "subsystem": "bdev", 00:24:49.147 "config": [ 00:24:49.147 { 00:24:49.147 "method": "bdev_set_options", 00:24:49.147 "params": { 00:24:49.147 "bdev_io_pool_size": 65535, 00:24:49.147 "bdev_io_cache_size": 256, 00:24:49.147 "bdev_auto_examine": true, 00:24:49.147 "iobuf_small_cache_size": 128, 00:24:49.147 "iobuf_large_cache_size": 16 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": "bdev_raid_set_options", 00:24:49.147 "params": { 00:24:49.147 "process_window_size_kb": 1024 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": "bdev_iscsi_set_options", 00:24:49.147 "params": { 00:24:49.147 "timeout_sec": 30 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": 
"bdev_nvme_set_options", 00:24:49.147 "params": { 00:24:49.147 "action_on_timeout": "none", 00:24:49.147 "timeout_us": 0, 00:24:49.147 "timeout_admin_us": 0, 00:24:49.147 "keep_alive_timeout_ms": 10000, 00:24:49.147 "arbitration_burst": 0, 00:24:49.147 "low_priority_weight": 0, 00:24:49.147 "medium_priority_weight": 0, 00:24:49.147 "high_priority_weight": 0, 00:24:49.147 "nvme_adminq_poll_period_us": 10000, 00:24:49.147 "nvme_ioq_poll_period_us": 0, 00:24:49.147 "io_queue_requests": 512, 00:24:49.147 "delay_cmd_submit": true, 00:24:49.147 "transport_retry_count": 4, 00:24:49.147 "bdev_retry_count": 3, 00:24:49.147 "transport_ack_timeout": 0, 00:24:49.147 "ctrlr_loss_timeout_sec": 0, 00:24:49.147 "reconnect_delay_sec": 0, 00:24:49.147 "fast_io_fail_timeout_sec": 0, 00:24:49.147 "disable_auto_failback": false, 00:24:49.147 "generate_uuids": false, 00:24:49.147 "transport_tos": 0, 00:24:49.147 "nvme_error_stat": false, 00:24:49.147 "rdma_srq_size": 0, 00:24:49.147 "io_path_stat": false, 00:24:49.147 "allow_accel_sequence": false, 00:24:49.147 "rdma_max_cq_size": 0, 00:24:49.147 "rdma_cm_event_timeout_ms": 0, 00:24:49.147 "dhchap_digests": [ 00:24:49.147 "sha256", 00:24:49.147 "sha384", 00:24:49.147 "sha512" 00:24:49.147 ], 00:24:49.147 "dhchap_dhgroups": [ 00:24:49.147 "null", 00:24:49.147 "ffdhe2048", 00:24:49.147 "ffdhe3072", 00:24:49.147 "ffdhe4096", 00:24:49.147 "ffdhe6144", 00:24:49.147 "ffdhe8192" 00:24:49.147 ] 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": "bdev_nvme_attach_controller", 00:24:49.147 "params": { 00:24:49.147 "name": "TLSTEST", 00:24:49.147 "trtype": "TCP", 00:24:49.147 "adrfam": "IPv4", 00:24:49.147 "traddr": "10.0.0.2", 00:24:49.147 "trsvcid": "4420", 00:24:49.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.147 "prchk_reftag": false, 00:24:49.147 "prchk_guard": false, 00:24:49.147 "ctrlr_loss_timeout_sec": 0, 00:24:49.147 "reconnect_delay_sec": 0, 00:24:49.147 "fast_io_fail_timeout_sec": 0, 00:24:49.147 "psk": "/tmp/tmp.0XcKU4U01A", 00:24:49.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.147 "hdgst": false, 00:24:49.147 "ddgst": false 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": "bdev_nvme_set_hotplug", 00:24:49.147 "params": { 00:24:49.147 "period_us": 100000, 00:24:49.147 "enable": false 00:24:49.147 } 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "method": "bdev_wait_for_examine" 00:24:49.147 } 00:24:49.147 ] 00:24:49.147 }, 00:24:49.147 { 00:24:49.147 "subsystem": "nbd", 00:24:49.147 "config": [] 00:24:49.147 } 00:24:49.147 ] 00:24:49.147 }' 00:24:49.147 [2024-07-12 01:43:15.366718] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:49.147 [2024-07-12 01:43:15.366770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047989 ] 00:24:49.147 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.147 [2024-07-12 01:43:15.421522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.147 [2024-07-12 01:43:15.449607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.407 [2024-07-12 01:43:15.569054] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.407 [2024-07-12 01:43:15.569115] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:49.979 01:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:49.979 01:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:49.979 01:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:49.979 Running I/O for 10 seconds... 00:24:59.977 00:24:59.977 Latency(us) 00:24:59.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.977 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:59.977 Verification LBA range: start 0x0 length 0x2000 00:24:59.977 TLSTESTn1 : 10.02 4642.89 18.14 0.00 0.00 27528.27 5543.25 68157.44 00:24:59.977 =================================================================================================================== 00:24:59.977 Total : 4642.89 18.14 0.00 0.00 27528.27 5543.25 68157.44 00:24:59.977 0 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4047989 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4047989 ']' 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4047989 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.977 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4047989 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4047989' 00:25:00.238 killing process with pid 4047989 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4047989 00:25:00.238 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.238 00:25:00.238 Latency(us) 00:25:00.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.238 =================================================================================================================== 00:25:00.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.238 [2024-07-12 01:43:26.340290] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4047989 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4047645 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4047645 ']' 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4047645 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4047645 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4047645' 00:25:00.238 killing process with pid 4047645 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4047645 00:25:00.238 [2024-07-12 01:43:26.500362] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:00.238 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4047645 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4050013 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4050013 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4050013 ']' 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.499 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 [2024-07-12 01:43:26.673114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:00.499 [2024-07-12 01:43:26.673166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.499 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.499 [2024-07-12 01:43:26.763632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.499 [2024-07-12 01:43:26.800325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:00.499 [2024-07-12 01:43:26.800368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.499 [2024-07-12 01:43:26.800378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.499 [2024-07-12 01:43:26.800386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.499 [2024-07-12 01:43:26.800393] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.499 [2024-07-12 01:43:26.800424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0XcKU4U01A 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0XcKU4U01A 00:25:00.760 01:43:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:00.760 [2024-07-12 01:43:27.051745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.760 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:01.021 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:01.021 [2024-07-12 01:43:27.356492] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.021 [2024-07-12 01:43:27.356692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.021 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:01.282 malloc0 00:25:01.282 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A 00:25:01.543 [2024-07-12 01:43:27.816507] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4050363 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4050363 /var/tmp/bdevperf.sock 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4050363 ']' 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:01.543 01:43:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.543 [2024-07-12 01:43:27.876796] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:01.543 [2024-07-12 01:43:27.876848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050363 ] 00:25:01.803 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.803 [2024-07-12 01:43:27.957376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.803 [2024-07-12 01:43:27.985931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.376 01:43:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.376 01:43:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:02.376 01:43:28 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XcKU4U01A 00:25:02.637 01:43:28 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:02.637 [2024-07-12 01:43:28.915038] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.637 nvme0n1 00:25:02.898 01:43:29 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.898 Running I/O for 1 seconds... 
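Note how the PSK is handled in this run compared with the earlier ones: instead of handing bdev_nvme_attach_controller the PSK file path directly (which is what triggered the spdk_nvme_ctrlr_opts.psk deprecation warnings above), the key is first registered in the keyring and then referenced by name. The two RPCs, copied from steps 227 and 228 above with the workspace prefix dropped:

  # register the PSK file under the name key0, then attach using that key name
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XcKU4U01A
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The one-second verify run whose results follow uses the nvme0n1 namespace exposed by that controller.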
00:25:03.837 00:25:03.837 Latency(us) 00:25:03.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.837 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:03.837 Verification LBA range: start 0x0 length 0x2000 00:25:03.837 nvme0n1 : 1.02 2837.53 11.08 0.00 0.00 44751.92 5188.27 65972.91 00:25:03.837 =================================================================================================================== 00:25:03.837 Total : 2837.53 11.08 0.00 0.00 44751.92 5188.27 65972.91 00:25:03.837 0 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4050363 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4050363 ']' 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4050363 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4050363 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4050363' 00:25:03.837 killing process with pid 4050363 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4050363 00:25:03.837 Received shutdown signal, test time was about 1.000000 seconds 00:25:03.837 00:25:03.837 Latency(us) 00:25:03.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.837 =================================================================================================================== 00:25:03.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.837 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4050363 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4050013 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4050013 ']' 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4050013 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4050013 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4050013' 00:25:04.096 killing process with pid 4050013 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4050013 00:25:04.096 [2024-07-12 01:43:30.320842] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4050013 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.096 
01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:04.096 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 01:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4050847 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4050847 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4050847 ']' 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:04.356 01:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.356 [2024-07-12 01:43:30.507035] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:04.356 [2024-07-12 01:43:30.507092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.356 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.356 [2024-07-12 01:43:30.579040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.356 [2024-07-12 01:43:30.610076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.356 [2024-07-12 01:43:30.610114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.356 [2024-07-12 01:43:30.610122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.356 [2024-07-12 01:43:30.610129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.356 [2024-07-12 01:43:30.610134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
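The target started at step 238 (pid 4050847) goes through a similar bring-up to the one at step 219: a TCP transport, subsystem cnode1 backed by malloc0, and a listener on 10.0.0.2 port 4420 created with -k so that a secure channel (TLS) is required; judging by the save_config dump at the end of this section, this time the host PSK is registered through the keyring ("psk": "key0") rather than by file path. For reference, the path-based sequence used by setup_nvmf_tgt earlier in this trace (steps 51 through 58, workspace prefix dropped) was:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a TLS (secure_channel) listener
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0XcKU4U01A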
00:25:04.356 [2024-07-12 01:43:30.610157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.925 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.925 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:04.925 01:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.925 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.925 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.186 [2024-07-12 01:43:31.306591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.186 malloc0 00:25:05.186 [2024-07-12 01:43:31.333325] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.186 [2024-07-12 01:43:31.333517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4051072 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4051072 /var/tmp/bdevperf.sock 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4051072 ']' 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:05.186 01:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.186 [2024-07-12 01:43:31.410135] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:05.186 [2024-07-12 01:43:31.410189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051072 ] 00:25:05.186 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.186 [2024-07-12 01:43:31.489404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.186 [2024-07-12 01:43:31.517900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.127 01:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:06.127 01:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:06.127 01:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XcKU4U01A 00:25:06.127 01:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:06.127 [2024-07-12 01:43:32.462830] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.388 nvme0n1 00:25:06.388 01:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.388 Running I/O for 1 seconds... 00:25:07.328 00:25:07.328 Latency(us) 00:25:07.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.328 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:07.328 Verification LBA range: start 0x0 length 0x2000 00:25:07.328 nvme0n1 : 1.02 3800.81 14.85 0.00 0.00 33373.45 4560.21 82138.45 00:25:07.328 =================================================================================================================== 00:25:07.328 Total : 3800.81 14.85 0.00 0.00 33373.45 4560.21 82138.45 00:25:07.328 0 00:25:07.328 01:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:07.328 01:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.328 01:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.588 01:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.588 01:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:07.588 "subsystems": [ 00:25:07.588 { 00:25:07.588 "subsystem": "keyring", 00:25:07.588 "config": [ 00:25:07.588 { 00:25:07.588 "method": "keyring_file_add_key", 00:25:07.588 "params": { 00:25:07.588 "name": "key0", 00:25:07.588 "path": "/tmp/tmp.0XcKU4U01A" 00:25:07.588 } 00:25:07.588 } 00:25:07.588 ] 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "subsystem": "iobuf", 00:25:07.588 "config": [ 00:25:07.588 { 00:25:07.588 "method": "iobuf_set_options", 00:25:07.588 "params": { 00:25:07.588 "small_pool_count": 8192, 00:25:07.588 "large_pool_count": 1024, 00:25:07.588 "small_bufsize": 8192, 00:25:07.588 "large_bufsize": 135168 00:25:07.588 } 00:25:07.588 } 00:25:07.588 ] 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "subsystem": "sock", 00:25:07.588 "config": [ 00:25:07.588 { 00:25:07.588 "method": "sock_set_default_impl", 00:25:07.588 "params": { 00:25:07.588 "impl_name": "posix" 00:25:07.588 } 00:25:07.588 }, 00:25:07.588 
{ 00:25:07.588 "method": "sock_impl_set_options", 00:25:07.588 "params": { 00:25:07.588 "impl_name": "ssl", 00:25:07.588 "recv_buf_size": 4096, 00:25:07.588 "send_buf_size": 4096, 00:25:07.588 "enable_recv_pipe": true, 00:25:07.588 "enable_quickack": false, 00:25:07.588 "enable_placement_id": 0, 00:25:07.588 "enable_zerocopy_send_server": true, 00:25:07.588 "enable_zerocopy_send_client": false, 00:25:07.588 "zerocopy_threshold": 0, 00:25:07.588 "tls_version": 0, 00:25:07.588 "enable_ktls": false 00:25:07.588 } 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "method": "sock_impl_set_options", 00:25:07.588 "params": { 00:25:07.588 "impl_name": "posix", 00:25:07.588 "recv_buf_size": 2097152, 00:25:07.588 "send_buf_size": 2097152, 00:25:07.588 "enable_recv_pipe": true, 00:25:07.588 "enable_quickack": false, 00:25:07.588 "enable_placement_id": 0, 00:25:07.588 "enable_zerocopy_send_server": true, 00:25:07.588 "enable_zerocopy_send_client": false, 00:25:07.588 "zerocopy_threshold": 0, 00:25:07.588 "tls_version": 0, 00:25:07.588 "enable_ktls": false 00:25:07.588 } 00:25:07.588 } 00:25:07.588 ] 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "subsystem": "vmd", 00:25:07.588 "config": [] 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "subsystem": "accel", 00:25:07.588 "config": [ 00:25:07.588 { 00:25:07.588 "method": "accel_set_options", 00:25:07.588 "params": { 00:25:07.588 "small_cache_size": 128, 00:25:07.588 "large_cache_size": 16, 00:25:07.588 "task_count": 2048, 00:25:07.588 "sequence_count": 2048, 00:25:07.588 "buf_count": 2048 00:25:07.588 } 00:25:07.588 } 00:25:07.588 ] 00:25:07.588 }, 00:25:07.588 { 00:25:07.588 "subsystem": "bdev", 00:25:07.588 "config": [ 00:25:07.588 { 00:25:07.589 "method": "bdev_set_options", 00:25:07.589 "params": { 00:25:07.589 "bdev_io_pool_size": 65535, 00:25:07.589 "bdev_io_cache_size": 256, 00:25:07.589 "bdev_auto_examine": true, 00:25:07.589 "iobuf_small_cache_size": 128, 00:25:07.589 "iobuf_large_cache_size": 16 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_raid_set_options", 00:25:07.589 "params": { 00:25:07.589 "process_window_size_kb": 1024 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_iscsi_set_options", 00:25:07.589 "params": { 00:25:07.589 "timeout_sec": 30 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_nvme_set_options", 00:25:07.589 "params": { 00:25:07.589 "action_on_timeout": "none", 00:25:07.589 "timeout_us": 0, 00:25:07.589 "timeout_admin_us": 0, 00:25:07.589 "keep_alive_timeout_ms": 10000, 00:25:07.589 "arbitration_burst": 0, 00:25:07.589 "low_priority_weight": 0, 00:25:07.589 "medium_priority_weight": 0, 00:25:07.589 "high_priority_weight": 0, 00:25:07.589 "nvme_adminq_poll_period_us": 10000, 00:25:07.589 "nvme_ioq_poll_period_us": 0, 00:25:07.589 "io_queue_requests": 0, 00:25:07.589 "delay_cmd_submit": true, 00:25:07.589 "transport_retry_count": 4, 00:25:07.589 "bdev_retry_count": 3, 00:25:07.589 "transport_ack_timeout": 0, 00:25:07.589 "ctrlr_loss_timeout_sec": 0, 00:25:07.589 "reconnect_delay_sec": 0, 00:25:07.589 "fast_io_fail_timeout_sec": 0, 00:25:07.589 "disable_auto_failback": false, 00:25:07.589 "generate_uuids": false, 00:25:07.589 "transport_tos": 0, 00:25:07.589 "nvme_error_stat": false, 00:25:07.589 "rdma_srq_size": 0, 00:25:07.589 "io_path_stat": false, 00:25:07.589 "allow_accel_sequence": false, 00:25:07.589 "rdma_max_cq_size": 0, 00:25:07.589 "rdma_cm_event_timeout_ms": 0, 00:25:07.589 "dhchap_digests": [ 00:25:07.589 "sha256", 00:25:07.589 "sha384", 
00:25:07.589 "sha512" 00:25:07.589 ], 00:25:07.589 "dhchap_dhgroups": [ 00:25:07.589 "null", 00:25:07.589 "ffdhe2048", 00:25:07.589 "ffdhe3072", 00:25:07.589 "ffdhe4096", 00:25:07.589 "ffdhe6144", 00:25:07.589 "ffdhe8192" 00:25:07.589 ] 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_nvme_set_hotplug", 00:25:07.589 "params": { 00:25:07.589 "period_us": 100000, 00:25:07.589 "enable": false 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_malloc_create", 00:25:07.589 "params": { 00:25:07.589 "name": "malloc0", 00:25:07.589 "num_blocks": 8192, 00:25:07.589 "block_size": 4096, 00:25:07.589 "physical_block_size": 4096, 00:25:07.589 "uuid": "9dd6455d-8175-4618-a39d-bf7607d253c4", 00:25:07.589 "optimal_io_boundary": 0 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "bdev_wait_for_examine" 00:25:07.589 } 00:25:07.589 ] 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "subsystem": "nbd", 00:25:07.589 "config": [] 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "subsystem": "scheduler", 00:25:07.589 "config": [ 00:25:07.589 { 00:25:07.589 "method": "framework_set_scheduler", 00:25:07.589 "params": { 00:25:07.589 "name": "static" 00:25:07.589 } 00:25:07.589 } 00:25:07.589 ] 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "subsystem": "nvmf", 00:25:07.589 "config": [ 00:25:07.589 { 00:25:07.589 "method": "nvmf_set_config", 00:25:07.589 "params": { 00:25:07.589 "discovery_filter": "match_any", 00:25:07.589 "admin_cmd_passthru": { 00:25:07.589 "identify_ctrlr": false 00:25:07.589 } 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_set_max_subsystems", 00:25:07.589 "params": { 00:25:07.589 "max_subsystems": 1024 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_set_crdt", 00:25:07.589 "params": { 00:25:07.589 "crdt1": 0, 00:25:07.589 "crdt2": 0, 00:25:07.589 "crdt3": 0 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_create_transport", 00:25:07.589 "params": { 00:25:07.589 "trtype": "TCP", 00:25:07.589 "max_queue_depth": 128, 00:25:07.589 "max_io_qpairs_per_ctrlr": 127, 00:25:07.589 "in_capsule_data_size": 4096, 00:25:07.589 "max_io_size": 131072, 00:25:07.589 "io_unit_size": 131072, 00:25:07.589 "max_aq_depth": 128, 00:25:07.589 "num_shared_buffers": 511, 00:25:07.589 "buf_cache_size": 4294967295, 00:25:07.589 "dif_insert_or_strip": false, 00:25:07.589 "zcopy": false, 00:25:07.589 "c2h_success": false, 00:25:07.589 "sock_priority": 0, 00:25:07.589 "abort_timeout_sec": 1, 00:25:07.589 "ack_timeout": 0, 00:25:07.589 "data_wr_pool_size": 0 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_create_subsystem", 00:25:07.589 "params": { 00:25:07.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.589 "allow_any_host": false, 00:25:07.589 "serial_number": "00000000000000000000", 00:25:07.589 "model_number": "SPDK bdev Controller", 00:25:07.589 "max_namespaces": 32, 00:25:07.589 "min_cntlid": 1, 00:25:07.589 "max_cntlid": 65519, 00:25:07.589 "ana_reporting": false 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_subsystem_add_host", 00:25:07.589 "params": { 00:25:07.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.589 "host": "nqn.2016-06.io.spdk:host1", 00:25:07.589 "psk": "key0" 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_subsystem_add_ns", 00:25:07.589 "params": { 00:25:07.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.589 "namespace": { 00:25:07.589 "nsid": 1, 00:25:07.589 "bdev_name": 
"malloc0", 00:25:07.589 "nguid": "9DD6455D81754618A39DBF7607D253C4", 00:25:07.589 "uuid": "9dd6455d-8175-4618-a39d-bf7607d253c4", 00:25:07.589 "no_auto_visible": false 00:25:07.589 } 00:25:07.589 } 00:25:07.589 }, 00:25:07.589 { 00:25:07.589 "method": "nvmf_subsystem_add_listener", 00:25:07.589 "params": { 00:25:07.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.589 "listen_address": { 00:25:07.589 "trtype": "TCP", 00:25:07.589 "adrfam": "IPv4", 00:25:07.589 "traddr": "10.0.0.2", 00:25:07.589 "trsvcid": "4420" 00:25:07.589 }, 00:25:07.589 "secure_channel": true 00:25:07.589 } 00:25:07.589 } 00:25:07.589 ] 00:25:07.589 } 00:25:07.589 ] 00:25:07.589 }' 00:25:07.589 01:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:07.851 01:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:07.851 "subsystems": [ 00:25:07.851 { 00:25:07.851 "subsystem": "keyring", 00:25:07.851 "config": [ 00:25:07.851 { 00:25:07.851 "method": "keyring_file_add_key", 00:25:07.851 "params": { 00:25:07.851 "name": "key0", 00:25:07.851 "path": "/tmp/tmp.0XcKU4U01A" 00:25:07.851 } 00:25:07.851 } 00:25:07.851 ] 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "subsystem": "iobuf", 00:25:07.851 "config": [ 00:25:07.851 { 00:25:07.851 "method": "iobuf_set_options", 00:25:07.851 "params": { 00:25:07.851 "small_pool_count": 8192, 00:25:07.851 "large_pool_count": 1024, 00:25:07.851 "small_bufsize": 8192, 00:25:07.851 "large_bufsize": 135168 00:25:07.851 } 00:25:07.851 } 00:25:07.851 ] 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "subsystem": "sock", 00:25:07.851 "config": [ 00:25:07.851 { 00:25:07.851 "method": "sock_set_default_impl", 00:25:07.851 "params": { 00:25:07.851 "impl_name": "posix" 00:25:07.851 } 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "method": "sock_impl_set_options", 00:25:07.851 "params": { 00:25:07.851 "impl_name": "ssl", 00:25:07.851 "recv_buf_size": 4096, 00:25:07.851 "send_buf_size": 4096, 00:25:07.851 "enable_recv_pipe": true, 00:25:07.851 "enable_quickack": false, 00:25:07.851 "enable_placement_id": 0, 00:25:07.851 "enable_zerocopy_send_server": true, 00:25:07.851 "enable_zerocopy_send_client": false, 00:25:07.851 "zerocopy_threshold": 0, 00:25:07.851 "tls_version": 0, 00:25:07.851 "enable_ktls": false 00:25:07.851 } 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "method": "sock_impl_set_options", 00:25:07.851 "params": { 00:25:07.851 "impl_name": "posix", 00:25:07.851 "recv_buf_size": 2097152, 00:25:07.851 "send_buf_size": 2097152, 00:25:07.851 "enable_recv_pipe": true, 00:25:07.851 "enable_quickack": false, 00:25:07.851 "enable_placement_id": 0, 00:25:07.851 "enable_zerocopy_send_server": true, 00:25:07.851 "enable_zerocopy_send_client": false, 00:25:07.851 "zerocopy_threshold": 0, 00:25:07.851 "tls_version": 0, 00:25:07.851 "enable_ktls": false 00:25:07.851 } 00:25:07.851 } 00:25:07.851 ] 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "subsystem": "vmd", 00:25:07.851 "config": [] 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "subsystem": "accel", 00:25:07.851 "config": [ 00:25:07.851 { 00:25:07.851 "method": "accel_set_options", 00:25:07.851 "params": { 00:25:07.851 "small_cache_size": 128, 00:25:07.851 "large_cache_size": 16, 00:25:07.851 "task_count": 2048, 00:25:07.851 "sequence_count": 2048, 00:25:07.851 "buf_count": 2048 00:25:07.851 } 00:25:07.851 } 00:25:07.851 ] 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "subsystem": "bdev", 00:25:07.851 "config": [ 00:25:07.851 { 00:25:07.851 
"method": "bdev_set_options", 00:25:07.851 "params": { 00:25:07.851 "bdev_io_pool_size": 65535, 00:25:07.851 "bdev_io_cache_size": 256, 00:25:07.851 "bdev_auto_examine": true, 00:25:07.851 "iobuf_small_cache_size": 128, 00:25:07.851 "iobuf_large_cache_size": 16 00:25:07.851 } 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "method": "bdev_raid_set_options", 00:25:07.851 "params": { 00:25:07.851 "process_window_size_kb": 1024 00:25:07.851 } 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "method": "bdev_iscsi_set_options", 00:25:07.851 "params": { 00:25:07.851 "timeout_sec": 30 00:25:07.851 } 00:25:07.851 }, 00:25:07.851 { 00:25:07.851 "method": "bdev_nvme_set_options", 00:25:07.851 "params": { 00:25:07.851 "action_on_timeout": "none", 00:25:07.851 "timeout_us": 0, 00:25:07.851 "timeout_admin_us": 0, 00:25:07.851 "keep_alive_timeout_ms": 10000, 00:25:07.851 "arbitration_burst": 0, 00:25:07.851 "low_priority_weight": 0, 00:25:07.851 "medium_priority_weight": 0, 00:25:07.851 "high_priority_weight": 0, 00:25:07.851 "nvme_adminq_poll_period_us": 10000, 00:25:07.851 "nvme_ioq_poll_period_us": 0, 00:25:07.851 "io_queue_requests": 512, 00:25:07.851 "delay_cmd_submit": true, 00:25:07.851 "transport_retry_count": 4, 00:25:07.851 "bdev_retry_count": 3, 00:25:07.852 "transport_ack_timeout": 0, 00:25:07.852 "ctrlr_loss_timeout_sec": 0, 00:25:07.852 "reconnect_delay_sec": 0, 00:25:07.852 "fast_io_fail_timeout_sec": 0, 00:25:07.852 "disable_auto_failback": false, 00:25:07.852 "generate_uuids": false, 00:25:07.852 "transport_tos": 0, 00:25:07.852 "nvme_error_stat": false, 00:25:07.852 "rdma_srq_size": 0, 00:25:07.852 "io_path_stat": false, 00:25:07.852 "allow_accel_sequence": false, 00:25:07.852 "rdma_max_cq_size": 0, 00:25:07.852 "rdma_cm_event_timeout_ms": 0, 00:25:07.852 "dhchap_digests": [ 00:25:07.852 "sha256", 00:25:07.852 "sha384", 00:25:07.852 "sha512" 00:25:07.852 ], 00:25:07.852 "dhchap_dhgroups": [ 00:25:07.852 "null", 00:25:07.852 "ffdhe2048", 00:25:07.852 "ffdhe3072", 00:25:07.852 "ffdhe4096", 00:25:07.852 "ffdhe6144", 00:25:07.852 "ffdhe8192" 00:25:07.852 ] 00:25:07.852 } 00:25:07.852 }, 00:25:07.852 { 00:25:07.852 "method": "bdev_nvme_attach_controller", 00:25:07.852 "params": { 00:25:07.852 "name": "nvme0", 00:25:07.852 "trtype": "TCP", 00:25:07.852 "adrfam": "IPv4", 00:25:07.852 "traddr": "10.0.0.2", 00:25:07.852 "trsvcid": "4420", 00:25:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.852 "prchk_reftag": false, 00:25:07.852 "prchk_guard": false, 00:25:07.852 "ctrlr_loss_timeout_sec": 0, 00:25:07.852 "reconnect_delay_sec": 0, 00:25:07.852 "fast_io_fail_timeout_sec": 0, 00:25:07.852 "psk": "key0", 00:25:07.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.852 "hdgst": false, 00:25:07.852 "ddgst": false 00:25:07.852 } 00:25:07.852 }, 00:25:07.852 { 00:25:07.852 "method": "bdev_nvme_set_hotplug", 00:25:07.852 "params": { 00:25:07.852 "period_us": 100000, 00:25:07.852 "enable": false 00:25:07.852 } 00:25:07.852 }, 00:25:07.852 { 00:25:07.852 "method": "bdev_enable_histogram", 00:25:07.852 "params": { 00:25:07.852 "name": "nvme0n1", 00:25:07.852 "enable": true 00:25:07.852 } 00:25:07.852 }, 00:25:07.852 { 00:25:07.852 "method": "bdev_wait_for_examine" 00:25:07.852 } 00:25:07.852 ] 00:25:07.852 }, 00:25:07.852 { 00:25:07.852 "subsystem": "nbd", 00:25:07.852 "config": [] 00:25:07.852 } 00:25:07.852 ] 00:25:07.852 }' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4051072 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4051072 
']' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4051072 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4051072 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4051072' 00:25:07.852 killing process with pid 4051072 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4051072 00:25:07.852 Received shutdown signal, test time was about 1.000000 seconds 00:25:07.852 00:25:07.852 Latency(us) 00:25:07.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.852 =================================================================================================================== 00:25:07.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4051072 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4050847 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4050847 ']' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4050847 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.852 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4050847 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4050847' 00:25:08.113 killing process with pid 4050847 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4050847 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4050847 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:08.113 01:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:08.113 "subsystems": [ 00:25:08.113 { 00:25:08.113 "subsystem": "keyring", 00:25:08.113 "config": [ 00:25:08.113 { 00:25:08.113 "method": "keyring_file_add_key", 00:25:08.113 "params": { 00:25:08.113 "name": "key0", 00:25:08.113 "path": "/tmp/tmp.0XcKU4U01A" 00:25:08.113 } 00:25:08.113 } 00:25:08.113 ] 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "subsystem": "iobuf", 00:25:08.113 "config": [ 00:25:08.113 { 00:25:08.113 "method": "iobuf_set_options", 00:25:08.113 "params": { 00:25:08.113 "small_pool_count": 8192, 00:25:08.113 "large_pool_count": 1024, 00:25:08.113 "small_bufsize": 8192, 00:25:08.113 "large_bufsize": 135168 00:25:08.113 } 00:25:08.113 } 00:25:08.113 ] 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "subsystem": "sock", 
00:25:08.113 "config": [ 00:25:08.113 { 00:25:08.113 "method": "sock_set_default_impl", 00:25:08.113 "params": { 00:25:08.113 "impl_name": "posix" 00:25:08.113 } 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "method": "sock_impl_set_options", 00:25:08.113 "params": { 00:25:08.113 "impl_name": "ssl", 00:25:08.113 "recv_buf_size": 4096, 00:25:08.113 "send_buf_size": 4096, 00:25:08.113 "enable_recv_pipe": true, 00:25:08.113 "enable_quickack": false, 00:25:08.113 "enable_placement_id": 0, 00:25:08.113 "enable_zerocopy_send_server": true, 00:25:08.113 "enable_zerocopy_send_client": false, 00:25:08.113 "zerocopy_threshold": 0, 00:25:08.113 "tls_version": 0, 00:25:08.113 "enable_ktls": false 00:25:08.113 } 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "method": "sock_impl_set_options", 00:25:08.113 "params": { 00:25:08.113 "impl_name": "posix", 00:25:08.113 "recv_buf_size": 2097152, 00:25:08.113 "send_buf_size": 2097152, 00:25:08.113 "enable_recv_pipe": true, 00:25:08.113 "enable_quickack": false, 00:25:08.113 "enable_placement_id": 0, 00:25:08.113 "enable_zerocopy_send_server": true, 00:25:08.113 "enable_zerocopy_send_client": false, 00:25:08.113 "zerocopy_threshold": 0, 00:25:08.113 "tls_version": 0, 00:25:08.113 "enable_ktls": false 00:25:08.113 } 00:25:08.113 } 00:25:08.113 ] 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "subsystem": "vmd", 00:25:08.113 "config": [] 00:25:08.113 }, 00:25:08.113 { 00:25:08.113 "subsystem": "accel", 00:25:08.113 "config": [ 00:25:08.113 { 00:25:08.113 "method": "accel_set_options", 00:25:08.113 "params": { 00:25:08.113 "small_cache_size": 128, 00:25:08.113 "large_cache_size": 16, 00:25:08.113 "task_count": 2048, 00:25:08.113 "sequence_count": 2048, 00:25:08.113 "buf_count": 2048 00:25:08.113 } 00:25:08.113 } 00:25:08.113 ] 00:25:08.113 }, 00:25:08.114 { 00:25:08.114 "subsystem": "bdev", 00:25:08.114 "config": [ 00:25:08.114 { 00:25:08.114 "method": "bdev_set_options", 00:25:08.114 "params": { 00:25:08.114 "bdev_io_pool_size": 65535, 00:25:08.114 "bdev_io_cache_size": 256, 00:25:08.114 "bdev_auto_examine": true, 00:25:08.114 "iobuf_small_cache_size": 128, 00:25:08.114 "iobuf_large_cache_size": 16 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_raid_set_options", 00:25:08.114 "params": { 00:25:08.114 "process_window_size_kb": 1024 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_iscsi_set_options", 00:25:08.114 "params": { 00:25:08.114 "timeout_sec": 30 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_nvme_set_options", 00:25:08.114 "params": { 00:25:08.114 "action_on_timeout": "none", 00:25:08.114 "timeout_us": 0, 00:25:08.114 "timeout_admin_us": 0, 00:25:08.114 "keep_alive_timeout_ms": 10000, 00:25:08.114 "arbitration_burst": 0, 00:25:08.114 "low_priority_weight": 0, 00:25:08.114 "medium_priority_weight": 0, 00:25:08.114 "high_priority_weight": 0, 00:25:08.114 "nvme_adminq_poll_period_us": 10000, 00:25:08.114 "nvme_ioq_poll_period_us": 0, 00:25:08.114 "io_queue_requests": 0, 00:25:08.114 "delay_cmd_submit": true, 00:25:08.114 "transport_retry_count": 4, 00:25:08.114 "bdev_retry_count": 3, 00:25:08.114 "transport_ack_timeout": 0, 00:25:08.114 "ctrlr_loss_timeout_sec": 0, 00:25:08.114 "reconnect_delay_sec": 0, 00:25:08.114 "fast_io_fail_timeout_sec": 0, 00:25:08.114 "disable_auto_failback": false, 00:25:08.114 "generate_uuids": false, 00:25:08.114 "transport_tos": 0, 00:25:08.114 "nvme_error_stat": false, 00:25:08.114 "rdma_srq_size": 0, 00:25:08.114 "io_path_stat": false, 00:25:08.114 
"allow_accel_sequence": false, 00:25:08.114 "rdma_max_cq_size": 0, 00:25:08.114 "rdma_cm_event_timeout_ms": 0, 00:25:08.114 "dhchap_digests": [ 00:25:08.114 "sha256", 00:25:08.114 "sha384", 00:25:08.114 "sha512" 00:25:08.114 ], 00:25:08.114 "dhchap_dhgroups": [ 00:25:08.114 "null", 00:25:08.114 "ffdhe2048", 00:25:08.114 "ffdhe3072", 00:25:08.114 "ffdhe4096", 00:25:08.114 "ffdhe6144", 00:25:08.114 "ffdhe8192" 00:25:08.114 ] 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_nvme_set_hotplug", 00:25:08.114 "params": { 00:25:08.114 "period_us": 100000, 00:25:08.114 "enable": false 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_malloc_create", 00:25:08.114 "params": { 00:25:08.114 "name": "malloc0", 00:25:08.114 "num_blocks": 8192, 00:25:08.114 "block_size": 4096, 00:25:08.114 "physical_block_size": 4096, 00:25:08.114 "uuid": "9dd6455d-8175-4618-a39d-bf7607d253c4", 00:25:08.114 "optimal_io_boundary": 0 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "bdev_wait_for_examine" 00:25:08.114 } 00:25:08.114 ] 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "subsystem": "nbd", 00:25:08.114 "config": [] 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "subsystem": "scheduler", 00:25:08.114 "config": [ 00:25:08.114 { 00:25:08.114 "method": "framework_set_scheduler", 00:25:08.114 "params": { 00:25:08.114 "name": "static" 00:25:08.114 } 00:25:08.114 } 00:25:08.114 ] 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "subsystem": "nvmf", 00:25:08.114 "config": [ 00:25:08.114 { 00:25:08.114 "method": "nvmf_set_config", 00:25:08.114 "params": { 00:25:08.114 "discovery_filter": "match_any", 00:25:08.114 "admin_cmd_passthru": { 00:25:08.114 "identify_ctrlr": false 00:25:08.114 } 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_set_max_subsystems", 00:25:08.114 "params": { 00:25:08.114 "max_subsystems": 1024 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_set_crdt", 00:25:08.114 "params": { 00:25:08.114 "crdt1": 0, 00:25:08.114 "crdt2": 0, 00:25:08.114 "crdt3": 0 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_create_transport", 00:25:08.114 "params": { 00:25:08.114 "trtype": "TCP", 00:25:08.114 "max_queue_depth": 128, 00:25:08.114 "max_io_qpairs_per_ctrlr": 127, 00:25:08.114 "in_capsule_data_size": 4096, 00:25:08.114 "max_io_size": 131072, 00:25:08.114 "io_unit_size": 131072, 00:25:08.114 "max_aq_depth": 128, 00:25:08.114 "num_shared_buffers": 511, 00:25:08.114 "buf_cache_size": 4294967295, 00:25:08.114 "dif_insert_or_strip": false, 00:25:08.114 "zcopy": false, 00:25:08.114 "c2h_success": false, 00:25:08.114 "sock_priority": 0, 00:25:08.114 "abort_timeout_sec": 1, 00:25:08.114 "ack_timeout": 0, 00:25:08.114 "data_wr_pool_size": 0 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_create_subsystem", 00:25:08.114 "params": { 00:25:08.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.114 00:25:08.114 "allow_any_host": false, 00:25:08.114 "serial_number": "00000000000000000000", 00:25:08.114 "model_number": "SPDK bdev Controller", 00:25:08.114 "max_namespaces": 32, 00:25:08.114 "min_cntlid": 1, 00:25:08.114 "max_cntlid": 65519, 00:25:08.114 "ana_reporting": false 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_subsystem_add_host", 00:25:08.114 "params": { 00:25:08.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.114 "host": "nqn.2016-06.io.spdk:host1", 
00:25:08.114 "psk": "key0" 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_subsystem_add_ns", 00:25:08.114 "params": { 00:25:08.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.114 "namespace": { 00:25:08.114 "nsid": 1, 00:25:08.114 "bdev_name": "malloc0", 00:25:08.114 "nguid": "9DD6455D81754618A39DBF7607D253C4", 00:25:08.114 "uuid": "9dd6455d-8175-4618-a39d-bf7607d253c4", 00:25:08.114 "no_auto_visible": false 00:25:08.114 } 00:25:08.114 } 00:25:08.114 }, 00:25:08.114 { 00:25:08.114 "method": "nvmf_subsystem_add_listener", 00:25:08.114 "params": { 00:25:08.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.114 "listen_address": { 00:25:08.114 "trtype": "TCP", 00:25:08.114 "adrfam": "IPv4", 00:25:08.114 "traddr": "10.0.0.2", 00:25:08.114 "trsvcid": "4420" 00:25:08.114 }, 00:25:08.114 "secure_channel": true 00:25:08.114 } 00:25:08.114 } 00:25:08.114 ] 00:25:08.114 } 00:25:08.114 ] 00:25:08.114 }' 00:25:08.114 01:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4051749 00:25:08.114 01:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4051749 00:25:08.114 01:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:08.114 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4051749 ']' 00:25:08.115 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.115 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:08.115 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.115 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:08.115 01:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.115 [2024-07-12 01:43:34.435726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:08.115 [2024-07-12 01:43:34.435819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.376 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.376 [2024-07-12 01:43:34.512350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.376 [2024-07-12 01:43:34.544329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.376 [2024-07-12 01:43:34.544370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.376 [2024-07-12 01:43:34.544379] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.376 [2024-07-12 01:43:34.544385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.376 [2024-07-12 01:43:34.544391] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.376 [2024-07-12 01:43:34.544454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.636 [2024-07-12 01:43:34.735262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.636 [2024-07-12 01:43:34.767267] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:08.636 [2024-07-12 01:43:34.776526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4051785 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4051785 /var/tmp/bdevperf.sock 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4051785 ']' 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
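Before the config-driven launch below, note that the first bdevperf pass earlier in this trace drove the same TLS attach over RPC rather than a piped config. Condensed from the commands visible above (Jenkins workspace prefixes dropped; the key path is the temporary PSK file created earlier in the suite), that RPC-driven variant looks roughly like this:
# Start bdevperf idle (-z) and configure it over its own RPC socket.
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# Register the PSK file and attach a TLS-enabled controller to the remote subsystem.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XcKU4U01A
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Run the verify workload against the attached namespace (nvme0n1 in the output above).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests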
00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.897 01:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:08.897 "subsystems": [ 00:25:08.897 { 00:25:08.897 "subsystem": "keyring", 00:25:08.897 "config": [ 00:25:08.897 { 00:25:08.897 "method": "keyring_file_add_key", 00:25:08.897 "params": { 00:25:08.897 "name": "key0", 00:25:08.897 "path": "/tmp/tmp.0XcKU4U01A" 00:25:08.897 } 00:25:08.897 } 00:25:08.897 ] 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "subsystem": "iobuf", 00:25:08.897 "config": [ 00:25:08.897 { 00:25:08.897 "method": "iobuf_set_options", 00:25:08.897 "params": { 00:25:08.897 "small_pool_count": 8192, 00:25:08.897 "large_pool_count": 1024, 00:25:08.897 "small_bufsize": 8192, 00:25:08.897 "large_bufsize": 135168 00:25:08.897 } 00:25:08.897 } 00:25:08.897 ] 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "subsystem": "sock", 00:25:08.897 "config": [ 00:25:08.897 { 00:25:08.897 "method": "sock_set_default_impl", 00:25:08.897 "params": { 00:25:08.897 "impl_name": "posix" 00:25:08.897 } 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "method": "sock_impl_set_options", 00:25:08.897 "params": { 00:25:08.897 "impl_name": "ssl", 00:25:08.897 "recv_buf_size": 4096, 00:25:08.897 "send_buf_size": 4096, 00:25:08.897 "enable_recv_pipe": true, 00:25:08.897 "enable_quickack": false, 00:25:08.897 "enable_placement_id": 0, 00:25:08.897 "enable_zerocopy_send_server": true, 00:25:08.897 "enable_zerocopy_send_client": false, 00:25:08.897 "zerocopy_threshold": 0, 00:25:08.897 "tls_version": 0, 00:25:08.897 "enable_ktls": false 00:25:08.897 } 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "method": "sock_impl_set_options", 00:25:08.897 "params": { 00:25:08.897 "impl_name": "posix", 00:25:08.897 "recv_buf_size": 2097152, 00:25:08.897 "send_buf_size": 2097152, 00:25:08.897 "enable_recv_pipe": true, 00:25:08.897 "enable_quickack": false, 00:25:08.897 "enable_placement_id": 0, 00:25:08.897 "enable_zerocopy_send_server": true, 00:25:08.897 "enable_zerocopy_send_client": false, 00:25:08.897 "zerocopy_threshold": 0, 00:25:08.897 "tls_version": 0, 00:25:08.897 "enable_ktls": false 00:25:08.897 } 00:25:08.897 } 00:25:08.897 ] 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "subsystem": "vmd", 00:25:08.897 "config": [] 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "subsystem": "accel", 00:25:08.897 "config": [ 00:25:08.897 { 00:25:08.897 "method": "accel_set_options", 00:25:08.897 "params": { 00:25:08.897 "small_cache_size": 128, 00:25:08.897 "large_cache_size": 16, 00:25:08.897 "task_count": 2048, 00:25:08.897 "sequence_count": 2048, 00:25:08.897 "buf_count": 2048 00:25:08.897 } 00:25:08.897 } 00:25:08.897 ] 00:25:08.897 }, 00:25:08.897 { 00:25:08.897 "subsystem": "bdev", 00:25:08.897 "config": [ 00:25:08.897 { 00:25:08.897 "method": "bdev_set_options", 00:25:08.897 "params": { 00:25:08.897 "bdev_io_pool_size": 65535, 00:25:08.897 "bdev_io_cache_size": 256, 00:25:08.897 "bdev_auto_examine": true, 00:25:08.897 "iobuf_small_cache_size": 128, 00:25:08.898 "iobuf_large_cache_size": 16 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_raid_set_options", 00:25:08.898 "params": { 00:25:08.898 "process_window_size_kb": 1024 00:25:08.898 } 
00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_iscsi_set_options", 00:25:08.898 "params": { 00:25:08.898 "timeout_sec": 30 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_nvme_set_options", 00:25:08.898 "params": { 00:25:08.898 "action_on_timeout": "none", 00:25:08.898 "timeout_us": 0, 00:25:08.898 "timeout_admin_us": 0, 00:25:08.898 "keep_alive_timeout_ms": 10000, 00:25:08.898 "arbitration_burst": 0, 00:25:08.898 "low_priority_weight": 0, 00:25:08.898 "medium_priority_weight": 0, 00:25:08.898 "high_priority_weight": 0, 00:25:08.898 "nvme_adminq_poll_period_us": 10000, 00:25:08.898 "nvme_ioq_poll_period_us": 0, 00:25:08.898 "io_queue_requests": 512, 00:25:08.898 "delay_cmd_submit": true, 00:25:08.898 "transport_retry_count": 4, 00:25:08.898 "bdev_retry_count": 3, 00:25:08.898 "transport_ack_timeout": 0, 00:25:08.898 "ctrlr_loss_timeout_sec": 0, 00:25:08.898 "reconnect_delay_sec": 0, 00:25:08.898 "fast_io_fail_timeout_sec": 0, 00:25:08.898 "disable_auto_failback": false, 00:25:08.898 "generate_uuids": false, 00:25:08.898 "transport_tos": 0, 00:25:08.898 "nvme_error_stat": false, 00:25:08.898 "rdma_srq_size": 0, 00:25:08.898 "io_path_stat": false, 00:25:08.898 "allow_accel_sequence": false, 00:25:08.898 "rdma_max_cq_size": 0, 00:25:08.898 "rdma_cm_event_timeout_ms": 0, 00:25:08.898 "dhchap_digests": [ 00:25:08.898 "sha256", 00:25:08.898 "sha384", 00:25:08.898 "sha512" 00:25:08.898 ], 00:25:08.898 "dhchap_dhgroups": [ 00:25:08.898 "null", 00:25:08.898 "ffdhe2048", 00:25:08.898 "ffdhe3072", 00:25:08.898 "ffdhe4096", 00:25:08.898 "ffdhe6144", 00:25:08.898 "ffdhe8192" 00:25:08.898 ] 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_nvme_attach_controller", 00:25:08.898 "params": { 00:25:08.898 "name": "nvme0", 00:25:08.898 "trtype": "TCP", 00:25:08.898 "adrfam": "IPv4", 00:25:08.898 "traddr": "10.0.0.2", 00:25:08.898 "trsvcid": "4420", 00:25:08.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.898 "prchk_reftag": false, 00:25:08.898 "prchk_guard": false, 00:25:08.898 "ctrlr_loss_timeout_sec": 0, 00:25:08.898 "reconnect_delay_sec": 0, 00:25:08.898 "fast_io_fail_timeout_sec": 0, 00:25:08.898 "psk": "key0", 00:25:08.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.898 "hdgst": false, 00:25:08.898 "ddgst": false 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_nvme_set_hotplug", 00:25:08.898 "params": { 00:25:08.898 "period_us": 100000, 00:25:08.898 "enable": false 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_enable_histogram", 00:25:08.898 "params": { 00:25:08.898 "name": "nvme0n1", 00:25:08.898 "enable": true 00:25:08.898 } 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "method": "bdev_wait_for_examine" 00:25:08.898 } 00:25:08.898 ] 00:25:08.898 }, 00:25:08.898 { 00:25:08.898 "subsystem": "nbd", 00:25:08.898 "config": [] 00:25:08.898 } 00:25:08.898 ] 00:25:08.898 }' 00:25:09.159 [2024-07-12 01:43:35.282652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:09.159 [2024-07-12 01:43:35.282721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051785 ] 00:25:09.159 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.159 [2024-07-12 01:43:35.363669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.159 [2024-07-12 01:43:35.392340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.420 [2024-07-12 01:43:35.521049] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.680 01:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:09.680 01:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:09.680 01:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.680 01:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:09.941 01:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.941 01:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.941 Running I/O for 1 seconds... 00:25:11.324 00:25:11.324 Latency(us) 00:25:11.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.324 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:11.324 Verification LBA range: start 0x0 length 0x2000 00:25:11.324 nvme0n1 : 1.02 4100.64 16.02 0.00 0.00 30869.31 5925.55 83012.27 00:25:11.324 =================================================================================================================== 00:25:11.324 Total : 4100.64 16.02 0.00 0.00 30869.31 5925.55 83012.27 00:25:11.324 0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:11.324 nvmf_trace.0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4051785 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4051785 ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4051785 
00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4051785 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4051785' 00:25:11.324 killing process with pid 4051785 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4051785 00:25:11.324 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.324 00:25:11.324 Latency(us) 00:25:11.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.324 =================================================================================================================== 00:25:11.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4051785 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.324 rmmod nvme_tcp 00:25:11.324 rmmod nvme_fabrics 00:25:11.324 rmmod nvme_keyring 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4051749 ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4051749 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4051749 ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4051749 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.324 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4051749 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4051749' 00:25:11.585 killing process with pid 4051749 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4051749 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4051749 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.585 01:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.587 01:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.587 01:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pYpTKAcbrx /tmp/tmp.H6UjwbIqEu /tmp/tmp.0XcKU4U01A 00:25:13.587 00:25:13.587 real 1m19.088s 00:25:13.587 user 1m55.045s 00:25:13.587 sys 0m28.118s 00:25:13.587 01:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.587 01:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.587 ************************************ 00:25:13.587 END TEST nvmf_tls 00:25:13.587 ************************************ 00:25:13.849 01:43:39 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:13.849 01:43:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:13.849 01:43:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.849 01:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.849 ************************************ 00:25:13.849 START TEST nvmf_fips 00:25:13.849 ************************************ 00:25:13.849 01:43:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:13.849 * Looking for test storage... 
00:25:13.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.849 01:43:40 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.849 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:13.850 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:14.112 Error setting digest 00:25:14.112 0072118DA17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:14.112 0072118DA17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.112 01:43:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.260 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.261 
01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:22.261 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:22.261 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:22.261 Found net devices under 0000:31:00.0: cvl_0_0 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:22.261 Found net devices under 0000:31:00.1: cvl_0_1 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:25:22.261 00:25:22.261 --- 10.0.0.2 ping statistics --- 00:25:22.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.261 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:22.261 00:25:22.261 --- 10.0.0.1 ping statistics --- 00:25:22.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.261 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4057091 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4057091 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4057091 ']' 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:22.261 01:43:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.523 [2024-07-12 01:43:48.687241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:22.523 [2024-07-12 01:43:48.687312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.523 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.523 [2024-07-12 01:43:48.783299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.523 [2024-07-12 01:43:48.828408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.523 [2024-07-12 01:43:48.828464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:22.523 [2024-07-12 01:43:48.828472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.523 [2024-07-12 01:43:48.828480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.523 [2024-07-12 01:43:48.828485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.523 [2024-07-12 01:43:48.828511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:23.467 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.468 [2024-07-12 01:43:49.650190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.468 [2024-07-12 01:43:49.666193] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:23.468 [2024-07-12 01:43:49.666435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.468 [2024-07-12 01:43:49.696333] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:23.468 malloc0 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4057198 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4057198 /var/tmp/bdevperf.sock 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4057198 ']' 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:23.468 01:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:23.468 [2024-07-12 01:43:49.806833] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:23.468 [2024-07-12 01:43:49.806905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057198 ] 00:25:23.730 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.730 [2024-07-12 01:43:49.868544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.730 [2024-07-12 01:43:49.905576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.302 01:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:24.302 01:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:24.302 01:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:24.563 [2024-07-12 01:43:50.685887] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:24.563 [2024-07-12 01:43:50.685949] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:24.563 TLSTESTn1 00:25:24.563 01:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.563 Running I/O for 10 seconds... 
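The TLS side of this test never touches the kernel initiator: the interchange-format PSK is written to a 0600 key file, bdevperf is started in wait-for-configuration mode (-z) with its own RPC socket, a TLS-protected NVMe bdev controller is attached over that socket with --psk, and perform_tests then drives the queued verify workload whose results follow. The same flow, condensed (every flag and identifier is as traced above; workspace paths are shortened):

# Interchange-format PSK, written without a trailing newline and locked down to 0600.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt

# bdevperf waits (-z) to be configured over its own RPC socket.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Attach a TLS-protected controller to the listener at 10.0.0.2:4420, then run the workload.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests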
00:25:34.568 00:25:34.568 Latency(us) 00:25:34.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:34.568 Verification LBA range: start 0x0 length 0x2000 00:25:34.568 TLSTESTn1 : 10.01 5823.00 22.75 0.00 0.00 21941.96 4505.60 69031.25 00:25:34.568 =================================================================================================================== 00:25:34.568 Total : 5823.00 22.75 0.00 0.00 21941.96 4505.60 69031.25 00:25:34.568 0 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:25:34.568 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:34.847 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:25:34.847 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:25:34.847 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:25:34.847 01:44:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:34.847 nvmf_trace.0 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4057198 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4057198 ']' 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4057198 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4057198 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4057198' 00:25:34.847 killing process with pid 4057198 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4057198 00:25:34.847 Received shutdown signal, test time was about 10.000000 seconds 00:25:34.847 00:25:34.847 Latency(us) 00:25:34.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.847 =================================================================================================================== 00:25:34.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.847 [2024-07-12 01:44:01.077718] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4057198 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.847 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.847 rmmod nvme_tcp 00:25:35.108 rmmod nvme_fabrics 00:25:35.109 rmmod nvme_keyring 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4057091 ']' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4057091 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4057091 ']' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4057091 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4057091 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4057091' 00:25:35.109 killing process with pid 4057091 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4057091 00:25:35.109 [2024-07-12 01:44:01.314961] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4057091 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.109 01:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.649 01:44:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:37.649 01:44:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:37.649 00:25:37.649 real 0m23.515s 00:25:37.649 user 0m19.605s 00:25:37.649 sys 0m11.193s 00:25:37.649 01:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:37.649 01:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.649 ************************************ 00:25:37.649 END TEST nvmf_fips 
00:25:37.649 ************************************ 00:25:37.649 01:44:03 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:37.649 01:44:03 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:37.649 01:44:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:37.649 01:44:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:37.649 01:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.650 ************************************ 00:25:37.650 START TEST nvmf_fuzz 00:25:37.650 ************************************ 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:37.650 * Looking for test storage... 00:25:37.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.650 01:44:03 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.650 01:44:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:45.788 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:45.788 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:45.788 Found net devices under 0000:31:00.0: cvl_0_0 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:45.788 Found net devices under 0000:31:00.1: cvl_0_1 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.788 01:44:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:25:45.788 00:25:45.788 --- 10.0.0.2 ping statistics --- 00:25:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.788 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:25:45.788 00:25:45.788 --- 10.0.0.1 ping statistics --- 00:25:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.788 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4064203 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4064203 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 4064203 ']' 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
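Both tests resolve their NIC ports the same way: gather_supported_nvmf_pci_devs matches each PCI function against the known Intel E810/X722 and Mellanox device IDs and then reads the backing net device names straight out of sysfs, which is where the 'Found 0000:31:00.x' and 'Found net devices under ...: cvl_0_x' lines above come from. A minimal sysfs-only sketch of that mapping (the PCI addresses are the two reported above; the helper works from a pre-built pci_bus_cache and also checks driver and link state, both of which are omitted here):

# Map a PCI function to the net devices it exposes, purely via sysfs.
for pci in 0000:31:00.0 0000:31:00.1; do
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 for these E810 ports
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b
    echo "Found $pci ($vendor - $device)"
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$net" ] || continue                      # no net device bound to this function
        echo "Found net devices under $pci: ${net##*/}"
    done
done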
00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:45.788 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 Malloc0 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.048 01:44:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.306 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:46.306 01:44:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:18.443 Fuzzing completed. 
Shutting down the fuzz application 00:26:18.443 00:26:18.443 Dumping successful admin opcodes: 00:26:18.443 8, 9, 10, 24, 00:26:18.443 Dumping successful io opcodes: 00:26:18.443 0, 9, 00:26:18.443 NS: 0x200003aeff00 I/O qp, Total commands completed: 912021, total successful commands: 5307, random_seed: 1580696960 00:26:18.443 NS: 0x200003aeff00 admin qp, Total commands completed: 115199, total successful commands: 940, random_seed: 1795643520 00:26:18.443 01:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:18.443 Fuzzing completed. Shutting down the fuzz application 00:26:18.443 00:26:18.443 Dumping successful admin opcodes: 00:26:18.443 24, 00:26:18.443 Dumping successful io opcodes: 00:26:18.443 00:26:18.443 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1678400077 00:26:18.443 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1678476139 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.443 01:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.443 rmmod nvme_tcp 00:26:18.443 rmmod nvme_fabrics 00:26:18.443 rmmod nvme_keyring 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4064203 ']' 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 4064203 ']' 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
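Stripped of the harness wrappers, the fuzz target being torn down here was a single malloc-backed namespace behind one TCP listener, and nvme_fuzz was pointed at that transport ID twice: a 30-second randomized pass with a fixed seed, then a replay pass driven by example.json. A condensed sketch of the provisioning and the two invocations (rpc_cmd above is the harness wrapper around scripts/rpc.py and is shown here as direct rpc.py calls against the target's default /var/tmp/spdk.sock socket; workspace paths are shortened):

# One 64 MB malloc bdev (512-byte blocks) exported through a single TCP subsystem and listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# Pass 1: 30 s of randomized commands against the subsystem, seeded (-S) for reproducibility.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
# Pass 2: replay the command patterns from example.json against the same subsystem.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" \
    -j ./test/app/fuzz/nvme_fuzz/example.json -a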
00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4064203' 00:26:18.443 killing process with pid 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 4064203 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.443 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.444 01:44:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.358 01:44:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.358 01:44:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:20.358 00:26:20.358 real 0m42.728s 00:26:20.358 user 0m55.389s 00:26:20.358 sys 0m16.233s 00:26:20.358 01:44:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:20.358 01:44:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:20.358 ************************************ 00:26:20.358 END TEST nvmf_fuzz 00:26:20.358 ************************************ 00:26:20.358 01:44:46 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:20.358 01:44:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:20.358 01:44:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:20.358 01:44:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.358 ************************************ 00:26:20.358 START TEST nvmf_multiconnection 00:26:20.358 ************************************ 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:20.358 * Looking for test storage... 
00:26:20.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.358 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.359 01:44:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.501 01:44:54 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.501 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.502 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.502 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.502 01:44:54 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.502 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.502 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
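Spelled out, the namespace split being applied here (together with the firewall rule and connectivity check that follow just below) reduces to the commands sketched next; the cvl_0_0/cvl_0_1 names come from the E810 ports discovered above and will differ on other hosts:

# Target port moves into its own network namespace; initiator port stays in the root namespace.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface, as the harness does
ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator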
00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:26:28.762 00:26:28.762 --- 10.0.0.2 ping statistics --- 00:26:28.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.762 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:26:28.762 00:26:28.762 --- 10.0.0.1 ping statistics --- 00:26:28.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.762 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4074888 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4074888 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 4074888 ']' 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
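The target itself is then launched inside that namespace; once its RPC socket is up, the harness creates the TCP transport (the nvmf_create_transport call appears just below). rpc_cmd in this log is the test wrapper around scripts/rpc.py, so a rough standalone equivalent with the same options looks like:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0xF: run on 4 cores; -i 0 and -e 0xFFFF are the shm id and tracepoint mask used in this run.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# The harness waits with its waitforlisten helper; a simple poll on the default RPC socket is a stand-in.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
# Transport options mirror NVMF_TRANSPORT_OPTS from this log (-t tcp -o) plus an 8 KiB IO unit size.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192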
00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.762 01:44:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.762 [2024-07-12 01:44:54.995581] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:28.762 [2024-07-12 01:44:54.995664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.762 [2024-07-12 01:44:55.078602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.022 [2024-07-12 01:44:55.119397] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.022 [2024-07-12 01:44:55.119442] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.022 [2024-07-12 01:44:55.119450] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.022 [2024-07-12 01:44:55.119457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.022 [2024-07-12 01:44:55.119463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.022 [2024-07-12 01:44:55.119604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.022 [2024-07-12 01:44:55.119724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.022 [2024-07-12 01:44:55.119880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.022 [2024-07-12 01:44:55.119882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.592 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:29.592 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:26:29.592 01:44:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.592 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.592 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 [2024-07-12 01:44:55.820895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 Malloc1 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 [2024-07-12 01:44:55.888296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 Malloc2 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.593 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.854 Malloc3 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.854 01:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:29.855 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 Malloc4 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 Malloc5 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 Malloc6 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 Malloc7 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.855 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 Malloc8 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 Malloc9 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 Malloc10 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.116 Malloc11 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.116 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
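Each of the eleven subsystems above is built from the same four RPCs: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Condensed into a loop (a sketch using scripts/rpc.py directly in place of the harness's rpc_cmd):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }    # stand-in for rpc_cmd
for i in $(seq 1 11); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"                               # 64 MiB, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"    # -a: allow any host, -s: serial
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done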
00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.117 01:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:32.092 01:44:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:32.092 01:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:32.092 01:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.092 01:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:32.092 01:44:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.023 01:44:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:35.409 01:45:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:35.409 01:45:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:35.409 01:45:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.409 01:45:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:35.409 01:45:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.325 
01:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.325 01:45:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:39.239 01:45:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:39.239 01:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:39.239 01:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:39.239 01:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:39.239 01:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.154 01:45:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:42.540 01:45:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:42.540 01:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:42.540 01:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.540 01:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:42.540 01:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.450 01:45:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:46.364 01:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:46.364 01:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:46.364 01:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.364 01:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:46.364 01:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.272 01:45:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:50.186 01:45:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:50.186 01:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:50.186 01:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.186 01:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:50.186 01:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.097 01:45:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:53.479 01:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:53.479 01:45:19 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:53.479 01:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.479 01:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:53.479 01:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:56.024 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:56.024 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:56.024 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:26:56.025 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:56.025 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:56.025 01:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:56.025 01:45:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.025 01:45:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:57.409 01:45:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:57.409 01:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:57.409 01:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.409 01:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:57.409 01:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.321 01:45:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:01.233 01:45:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:01.233 01:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:01.233 01:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.233 01:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
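The connect phase running through this stretch repeats one pattern per subsystem: nvme connect with the generated host NQN/ID, then poll lsblk until a block device with the matching SPDKn serial appears (which is all waitforserial does). Stripped down to a sketch, keeping the host NQN/ID from this log and simplifying the retry logic:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
i=1   # example index; the harness loops i over 1..11
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
# waitforserial, simplified: wait until a device with serial SPDK$i shows up.
sleep 2
while [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -lt 1 ]; do sleep 2; done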
00:27:01.233 01:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.328 01:45:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:05.234 01:45:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:05.234 01:45:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:05.234 01:45:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:05.234 01:45:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:05.234 01:45:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.142 01:45:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:09.049 01:45:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:09.049 01:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:09.049 01:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:09.049 01:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:09.049 01:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:10.956 01:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:10.956 [global] 00:27:10.956 thread=1 00:27:10.956 invalidate=1 00:27:10.956 rw=read 00:27:10.956 time_based=1 00:27:10.956 runtime=10 00:27:10.956 ioengine=libaio 00:27:10.956 direct=1 00:27:10.956 bs=262144 00:27:10.956 iodepth=64 00:27:10.956 norandommap=1 00:27:10.956 numjobs=1 00:27:10.956 00:27:10.956 [job0] 00:27:10.956 filename=/dev/nvme0n1 00:27:10.956 [job1] 00:27:10.956 filename=/dev/nvme10n1 00:27:10.956 [job2] 00:27:10.956 filename=/dev/nvme1n1 00:27:10.956 [job3] 00:27:10.956 filename=/dev/nvme2n1 00:27:10.956 [job4] 00:27:10.956 filename=/dev/nvme3n1 00:27:10.956 [job5] 00:27:10.956 filename=/dev/nvme4n1 00:27:10.956 [job6] 00:27:10.956 filename=/dev/nvme5n1 00:27:10.956 [job7] 00:27:10.956 filename=/dev/nvme6n1 00:27:10.956 [job8] 00:27:10.956 filename=/dev/nvme7n1 00:27:10.956 [job9] 00:27:10.956 filename=/dev/nvme8n1 00:27:10.956 [job10] 00:27:10.956 filename=/dev/nvme9n1 00:27:10.956 Could not set queue depth (nvme0n1) 00:27:10.956 Could not set queue depth (nvme10n1) 00:27:10.956 Could not set queue depth (nvme1n1) 00:27:10.956 Could not set queue depth (nvme2n1) 00:27:10.956 Could not set queue depth (nvme3n1) 00:27:10.956 Could not set queue depth (nvme4n1) 00:27:10.956 Could not set queue depth (nvme5n1) 00:27:10.956 Could not set queue depth (nvme6n1) 00:27:10.956 Could not set queue depth (nvme7n1) 00:27:10.956 Could not set queue depth (nvme8n1) 00:27:10.956 Could not set queue depth (nvme9n1) 00:27:11.534 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:11.534 fio-3.35 00:27:11.534 Starting 11 threads 00:27:23.769 00:27:23.769 job0: 
(groupid=0, jobs=1): err= 0: pid=4083886: Fri Jul 12 01:45:48 2024 00:27:23.769 read: IOPS=658, BW=165MiB/s (173MB/s)(1664MiB/10099msec) 00:27:23.769 slat (usec): min=5, max=148077, avg=1236.36, stdev=5452.68 00:27:23.769 clat (usec): min=1389, max=236794, avg=95760.69, stdev=58639.61 00:27:23.769 lat (usec): min=1435, max=290011, avg=96997.04, stdev=59589.19 00:27:23.769 clat percentiles (msec): 00:27:23.769 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 28], 00:27:23.769 | 30.00th=[ 45], 40.00th=[ 84], 50.00th=[ 107], 60.00th=[ 126], 00:27:23.769 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 176], 00:27:23.769 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 236], 99.95th=[ 236], 00:27:23.769 | 99.99th=[ 236] 00:27:23.769 bw ( KiB/s): min=96768, max=385536, per=7.45%, avg=168734.10, stdev=81598.16, samples=20 00:27:23.769 iops : min= 378, max= 1506, avg=659.10, stdev=318.73, samples=20 00:27:23.769 lat (msec) : 2=0.12%, 4=1.25%, 10=4.24%, 20=8.34%, 50=19.67% 00:27:23.769 lat (msec) : 100=13.40%, 250=52.98% 00:27:23.769 cpu : usr=0.30%, sys=2.28%, ctx=1680, majf=0, minf=4097 00:27:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.769 issued rwts: total=6655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.769 job1: (groupid=0, jobs=1): err= 0: pid=4083890: Fri Jul 12 01:45:48 2024 00:27:23.769 read: IOPS=793, BW=198MiB/s (208MB/s)(2004MiB/10105msec) 00:27:23.769 slat (usec): min=5, max=164751, avg=1099.24, stdev=4330.26 00:27:23.769 clat (usec): min=1796, max=258214, avg=79475.22, stdev=55294.15 00:27:23.769 lat (usec): min=1842, max=308374, avg=80574.46, stdev=56127.39 00:27:23.769 clat percentiles (msec): 00:27:23.769 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 29], 00:27:23.769 | 30.00th=[ 40], 40.00th=[ 48], 50.00th=[ 59], 60.00th=[ 79], 00:27:23.769 | 70.00th=[ 122], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 171], 00:27:23.769 | 99.00th=[ 213], 99.50th=[ 236], 99.90th=[ 251], 99.95th=[ 251], 00:27:23.769 | 99.99th=[ 259] 00:27:23.769 bw ( KiB/s): min=98816, max=465408, per=8.99%, avg=203609.15, stdev=120996.20, samples=20 00:27:23.769 iops : min= 386, max= 1818, avg=795.30, stdev=472.67, samples=20 00:27:23.769 lat (msec) : 2=0.02%, 4=0.41%, 10=2.64%, 20=9.50%, 50=29.97% 00:27:23.769 lat (msec) : 100=20.71%, 250=36.66%, 500=0.07% 00:27:23.769 cpu : usr=0.33%, sys=2.54%, ctx=1835, majf=0, minf=4097 00:27:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.769 issued rwts: total=8017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.769 job2: (groupid=0, jobs=1): err= 0: pid=4083902: Fri Jul 12 01:45:48 2024 00:27:23.769 read: IOPS=688, BW=172MiB/s (181MB/s)(1739MiB/10096msec) 00:27:23.769 slat (usec): min=5, max=134529, avg=1152.30, stdev=5801.66 00:27:23.769 clat (usec): min=1456, max=275585, avg=91684.41, stdev=55052.55 00:27:23.769 lat (usec): min=1502, max=296506, avg=92836.70, stdev=55934.28 00:27:23.769 clat percentiles (msec): 00:27:23.769 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 21], 20.00th=[ 34], 00:27:23.769 
| 30.00th=[ 45], 40.00th=[ 65], 50.00th=[ 97], 60.00th=[ 121], 00:27:23.769 | 70.00th=[ 138], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 165], 00:27:23.769 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 271], 99.95th=[ 271], 00:27:23.769 | 99.99th=[ 275] 00:27:23.769 bw ( KiB/s): min=95232, max=422400, per=7.79%, avg=176377.00, stdev=88552.21, samples=20 00:27:23.769 iops : min= 372, max= 1650, avg=688.90, stdev=345.94, samples=20 00:27:23.769 lat (msec) : 2=0.04%, 4=2.30%, 10=3.39%, 20=4.23%, 50=24.62% 00:27:23.769 lat (msec) : 100=16.82%, 250=48.39%, 500=0.20% 00:27:23.769 cpu : usr=0.27%, sys=2.07%, ctx=1693, majf=0, minf=4097 00:27:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.769 issued rwts: total=6954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.769 job3: (groupid=0, jobs=1): err= 0: pid=4083909: Fri Jul 12 01:45:48 2024 00:27:23.769 read: IOPS=480, BW=120MiB/s (126MB/s)(1214MiB/10102msec) 00:27:23.769 slat (usec): min=7, max=77180, avg=1997.88, stdev=6148.86 00:27:23.769 clat (msec): min=12, max=236, avg=131.02, stdev=34.12 00:27:23.769 lat (msec): min=13, max=244, avg=133.02, stdev=34.96 00:27:23.769 clat percentiles (msec): 00:27:23.769 | 1.00th=[ 30], 5.00th=[ 65], 10.00th=[ 87], 20.00th=[ 105], 00:27:23.769 | 30.00th=[ 121], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 144], 00:27:23.769 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 178], 00:27:23.769 | 99.00th=[ 201], 99.50th=[ 213], 99.90th=[ 236], 99.95th=[ 236], 00:27:23.769 | 99.99th=[ 236] 00:27:23.769 bw ( KiB/s): min=96256, max=230962, per=5.42%, avg=122626.50, stdev=31362.78, samples=20 00:27:23.769 iops : min= 376, max= 902, avg=479.00, stdev=122.48, samples=20 00:27:23.769 lat (msec) : 20=0.49%, 50=2.16%, 100=14.09%, 250=83.25% 00:27:23.769 cpu : usr=0.13%, sys=1.86%, ctx=1096, majf=0, minf=4097 00:27:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.769 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.769 job4: (groupid=0, jobs=1): err= 0: pid=4083913: Fri Jul 12 01:45:48 2024 00:27:23.769 read: IOPS=1060, BW=265MiB/s (278MB/s)(2666MiB/10058msec) 00:27:23.769 slat (usec): min=5, max=98425, avg=786.66, stdev=2998.94 00:27:23.769 clat (msec): min=2, max=205, avg=59.50, stdev=38.66 00:27:23.769 lat (msec): min=2, max=208, avg=60.29, stdev=39.14 00:27:23.769 clat percentiles (msec): 00:27:23.769 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 26], 20.00th=[ 30], 00:27:23.769 | 30.00th=[ 35], 40.00th=[ 42], 50.00th=[ 47], 60.00th=[ 55], 00:27:23.769 | 70.00th=[ 65], 80.00th=[ 86], 90.00th=[ 118], 95.00th=[ 150], 00:27:23.769 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 199], 99.95th=[ 201], 00:27:23.769 | 99.99th=[ 205] 00:27:23.770 bw ( KiB/s): min=90112, max=526848, per=11.99%, avg=271396.60, stdev=124153.62, samples=20 00:27:23.770 iops : min= 352, max= 2058, avg=1060.05, stdev=485.04, samples=20 00:27:23.770 lat (msec) : 4=0.09%, 10=1.43%, 20=3.60%, 50=50.35%, 100=28.43% 00:27:23.770 lat (msec) : 250=16.10% 00:27:23.770 cpu : usr=0.35%, 
sys=2.90%, ctx=2390, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=10665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job5: (groupid=0, jobs=1): err= 0: pid=4083924: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=659, BW=165MiB/s (173MB/s)(1668MiB/10116msec) 00:27:23.770 slat (usec): min=5, max=122392, avg=1249.50, stdev=5021.55 00:27:23.770 clat (msec): min=3, max=261, avg=95.67, stdev=47.02 00:27:23.770 lat (msec): min=3, max=262, avg=96.92, stdev=47.77 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 43], 20.00th=[ 53], 00:27:23.770 | 30.00th=[ 60], 40.00th=[ 72], 50.00th=[ 87], 60.00th=[ 110], 00:27:23.770 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 169], 00:27:23.770 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 239], 99.95th=[ 245], 00:27:23.770 | 99.99th=[ 262] 00:27:23.770 bw ( KiB/s): min=94208, max=318464, per=7.47%, avg=169147.25, stdev=69853.76, samples=20 00:27:23.770 iops : min= 368, max= 1244, avg=660.70, stdev=272.86, samples=20 00:27:23.770 lat (msec) : 4=0.25%, 10=1.69%, 20=0.91%, 50=13.61%, 100=40.08% 00:27:23.770 lat (msec) : 250=43.42%, 500=0.03% 00:27:23.770 cpu : usr=0.18%, sys=2.16%, ctx=1608, majf=0, minf=3534 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job6: (groupid=0, jobs=1): err= 0: pid=4083932: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=810, BW=203MiB/s (212MB/s)(2047MiB/10104msec) 00:27:23.770 slat (usec): min=5, max=50224, avg=1171.09, stdev=3637.33 00:27:23.770 clat (msec): min=5, max=237, avg=77.70, stdev=45.89 00:27:23.770 lat (msec): min=5, max=237, avg=78.87, stdev=46.63 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 41], 00:27:23.770 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 67], 00:27:23.770 | 70.00th=[ 102], 80.00th=[ 127], 90.00th=[ 153], 95.00th=[ 165], 00:27:23.770 | 99.00th=[ 182], 99.50th=[ 205], 99.90th=[ 228], 99.95th=[ 228], 00:27:23.770 | 99.99th=[ 239] 00:27:23.770 bw ( KiB/s): min=94208, max=465408, per=9.19%, avg=207976.85, stdev=111355.74, samples=20 00:27:23.770 iops : min= 368, max= 1818, avg=812.40, stdev=434.98, samples=20 00:27:23.770 lat (msec) : 10=0.29%, 20=2.28%, 50=32.79%, 100=34.22%, 250=30.41% 00:27:23.770 cpu : usr=0.32%, sys=2.72%, ctx=1733, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=8188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job7: (groupid=0, jobs=1): err= 0: pid=4083941: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=1284, BW=321MiB/s 
(337MB/s)(3215MiB/10013msec) 00:27:23.770 slat (usec): min=5, max=127850, avg=577.02, stdev=3414.01 00:27:23.770 clat (usec): min=1358, max=260616, avg=49213.25, stdev=39115.01 00:27:23.770 lat (usec): min=1385, max=297698, avg=49790.27, stdev=39604.96 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 24], 00:27:23.770 | 30.00th=[ 29], 40.00th=[ 33], 50.00th=[ 40], 60.00th=[ 45], 00:27:23.770 | 70.00th=[ 53], 80.00th=[ 65], 90.00th=[ 95], 95.00th=[ 150], 00:27:23.770 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 232], 99.95th=[ 239], 00:27:23.770 | 99.99th=[ 243] 00:27:23.770 bw ( KiB/s): min=168960, max=512000, per=14.47%, avg=327579.10, stdev=101142.41, samples=20 00:27:23.770 iops : min= 660, max= 2000, avg=1279.60, stdev=395.09, samples=20 00:27:23.770 lat (msec) : 2=0.30%, 4=1.34%, 10=4.77%, 20=8.20%, 50=52.90% 00:27:23.770 lat (msec) : 100=23.03%, 250=9.44%, 500=0.01% 00:27:23.770 cpu : usr=0.51%, sys=3.63%, ctx=2839, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=12860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job8: (groupid=0, jobs=1): err= 0: pid=4083968: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=721, BW=180MiB/s (189MB/s)(1824MiB/10113msec) 00:27:23.770 slat (usec): min=5, max=114521, avg=1263.09, stdev=5036.16 00:27:23.770 clat (msec): min=3, max=262, avg=87.34, stdev=51.89 00:27:23.770 lat (msec): min=3, max=262, avg=88.60, stdev=52.77 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 38], 00:27:23.770 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 72], 60.00th=[ 95], 00:27:23.770 | 70.00th=[ 127], 80.00th=[ 148], 90.00th=[ 161], 95.00th=[ 169], 00:27:23.770 | 99.00th=[ 186], 99.50th=[ 201], 99.90th=[ 232], 99.95th=[ 251], 00:27:23.770 | 99.99th=[ 264] 00:27:23.770 bw ( KiB/s): min=96256, max=536064, per=8.18%, avg=185118.80, stdev=114998.06, samples=20 00:27:23.770 iops : min= 376, max= 2094, avg=723.10, stdev=449.21, samples=20 00:27:23.770 lat (msec) : 4=0.22%, 10=2.07%, 20=2.96%, 50=23.46%, 100=32.70% 00:27:23.770 lat (msec) : 250=38.53%, 500=0.05% 00:27:23.770 cpu : usr=0.23%, sys=2.04%, ctx=1682, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=7296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job9: (groupid=0, jobs=1): err= 0: pid=4083980: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=779, BW=195MiB/s (204MB/s)(1953MiB/10018msec) 00:27:23.770 slat (usec): min=7, max=54431, avg=1250.92, stdev=3640.26 00:27:23.770 clat (msec): min=15, max=197, avg=80.72, stdev=41.18 00:27:23.770 lat (msec): min=16, max=209, avg=81.97, stdev=41.82 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 45], 00:27:23.770 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 82], 00:27:23.770 | 70.00th=[ 99], 80.00th=[ 124], 90.00th=[ 148], 95.00th=[ 159], 00:27:23.770 
| 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 192], 00:27:23.770 | 99.99th=[ 197] 00:27:23.770 bw ( KiB/s): min=97792, max=478208, per=8.76%, avg=198397.10, stdev=102886.94, samples=20 00:27:23.770 iops : min= 382, max= 1868, avg=774.90, stdev=401.96, samples=20 00:27:23.770 lat (msec) : 20=0.04%, 50=27.28%, 100=43.95%, 250=28.73% 00:27:23.770 cpu : usr=0.30%, sys=2.64%, ctx=1579, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=7813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 job10: (groupid=0, jobs=1): err= 0: pid=4083990: Fri Jul 12 01:45:48 2024 00:27:23.770 read: IOPS=943, BW=236MiB/s (247MB/s)(2371MiB/10046msec) 00:27:23.770 slat (usec): min=5, max=140819, avg=843.79, stdev=4511.01 00:27:23.770 clat (usec): min=1520, max=266697, avg=66906.16, stdev=45383.18 00:27:23.770 lat (usec): min=1554, max=311839, avg=67749.95, stdev=45983.48 00:27:23.770 clat percentiles (msec): 00:27:23.770 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 32], 00:27:23.770 | 30.00th=[ 39], 40.00th=[ 47], 50.00th=[ 58], 60.00th=[ 66], 00:27:23.770 | 70.00th=[ 82], 80.00th=[ 104], 90.00th=[ 136], 95.00th=[ 157], 00:27:23.770 | 99.00th=[ 197], 99.50th=[ 213], 99.90th=[ 266], 99.95th=[ 268], 00:27:23.770 | 99.99th=[ 268] 00:27:23.770 bw ( KiB/s): min=108032, max=444928, per=10.65%, avg=241126.40, stdev=96853.93, samples=20 00:27:23.770 iops : min= 422, max= 1738, avg=941.90, stdev=378.34, samples=20 00:27:23.770 lat (msec) : 2=0.25%, 4=1.62%, 10=4.14%, 20=5.30%, 50=32.30% 00:27:23.770 lat (msec) : 100=35.17%, 250=20.79%, 500=0.41% 00:27:23.770 cpu : usr=0.36%, sys=2.98%, ctx=2142, majf=0, minf=4097 00:27:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:23.770 issued rwts: total=9482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:23.770 00:27:23.770 Run status group 0 (all jobs): 00:27:23.770 READ: bw=2211MiB/s (2318MB/s), 120MiB/s-321MiB/s (126MB/s-337MB/s), io=21.8GiB (23.5GB), run=10013-10116msec 00:27:23.770 00:27:23.770 Disk stats (read/write): 00:27:23.770 nvme0n1: ios=13063/0, merge=0/0, ticks=1212923/0, in_queue=1212923, util=96.44% 00:27:23.770 nvme10n1: ios=16033/0, merge=0/0, ticks=1249822/0, in_queue=1249822, util=96.79% 00:27:23.770 nvme1n1: ios=13524/0, merge=0/0, ticks=1223408/0, in_queue=1223408, util=97.05% 00:27:23.770 nvme2n1: ios=9452/0, merge=0/0, ticks=1207772/0, in_queue=1207772, util=97.28% 00:27:23.770 nvme3n1: ios=20944/0, merge=0/0, ticks=1225097/0, in_queue=1225097, util=97.39% 00:27:23.770 nvme4n1: ios=13264/0, merge=0/0, ticks=1246252/0, in_queue=1246252, util=97.94% 00:27:23.770 nvme5n1: ios=16127/0, merge=0/0, ticks=1209044/0, in_queue=1209044, util=98.08% 00:27:23.770 nvme6n1: ios=25141/0, merge=0/0, ticks=1228418/0, in_queue=1228418, util=98.25% 00:27:23.770 nvme7n1: ios=14522/0, merge=0/0, ticks=1242090/0, in_queue=1242090, util=98.87% 00:27:23.770 nvme8n1: ios=15104/0, merge=0/0, ticks=1218358/0, in_queue=1218358, util=99.06% 00:27:23.770 nvme9n1: ios=18533/0, 
merge=0/0, ticks=1226646/0, in_queue=1226646, util=99.20% 00:27:23.770 01:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:23.770 [global] 00:27:23.770 thread=1 00:27:23.770 invalidate=1 00:27:23.770 rw=randwrite 00:27:23.770 time_based=1 00:27:23.770 runtime=10 00:27:23.770 ioengine=libaio 00:27:23.770 direct=1 00:27:23.770 bs=262144 00:27:23.770 iodepth=64 00:27:23.770 norandommap=1 00:27:23.770 numjobs=1 00:27:23.770 00:27:23.770 [job0] 00:27:23.770 filename=/dev/nvme0n1 00:27:23.770 [job1] 00:27:23.770 filename=/dev/nvme10n1 00:27:23.770 [job2] 00:27:23.770 filename=/dev/nvme1n1 00:27:23.770 [job3] 00:27:23.770 filename=/dev/nvme2n1 00:27:23.770 [job4] 00:27:23.770 filename=/dev/nvme3n1 00:27:23.770 [job5] 00:27:23.770 filename=/dev/nvme4n1 00:27:23.770 [job6] 00:27:23.770 filename=/dev/nvme5n1 00:27:23.770 [job7] 00:27:23.770 filename=/dev/nvme6n1 00:27:23.770 [job8] 00:27:23.770 filename=/dev/nvme7n1 00:27:23.770 [job9] 00:27:23.770 filename=/dev/nvme8n1 00:27:23.770 [job10] 00:27:23.770 filename=/dev/nvme9n1 00:27:23.770 Could not set queue depth (nvme0n1) 00:27:23.770 Could not set queue depth (nvme10n1) 00:27:23.770 Could not set queue depth (nvme1n1) 00:27:23.770 Could not set queue depth (nvme2n1) 00:27:23.770 Could not set queue depth (nvme3n1) 00:27:23.770 Could not set queue depth (nvme4n1) 00:27:23.770 Could not set queue depth (nvme5n1) 00:27:23.770 Could not set queue depth (nvme6n1) 00:27:23.770 Could not set queue depth (nvme7n1) 00:27:23.770 Could not set queue depth (nvme8n1) 00:27:23.770 Could not set queue depth (nvme9n1) 00:27:23.770 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:23.770 fio-3.35 00:27:23.770 Starting 11 threads 00:27:33.814 00:27:33.814 job0: (groupid=0, jobs=1): err= 0: pid=4086302: Fri Jul 12 01:45:59 2024 00:27:33.814 write: IOPS=620, BW=155MiB/s (163MB/s)(1568MiB/10103msec); 0 zone resets 00:27:33.814 slat (usec): min=16, max=25324, avg=1508.86, stdev=2755.39 00:27:33.814 clat (msec): min=6, max=205, avg=101.57, stdev=18.17 00:27:33.814 lat (msec): min=6, max=205, avg=103.08, 
stdev=18.34 00:27:33.814 clat percentiles (msec): 00:27:33.814 | 1.00th=[ 29], 5.00th=[ 66], 10.00th=[ 75], 20.00th=[ 100], 00:27:33.814 | 30.00th=[ 101], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 00:27:33.814 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 114], 95.00th=[ 123], 00:27:33.814 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 192], 99.95th=[ 199], 00:27:33.814 | 99.99th=[ 207] 00:27:33.814 bw ( KiB/s): min=135168, max=197120, per=9.07%, avg=158924.80, stdev=16465.82, samples=20 00:27:33.814 iops : min= 528, max= 770, avg=620.80, stdev=64.32, samples=20 00:27:33.814 lat (msec) : 10=0.05%, 20=0.46%, 50=1.56%, 100=24.61%, 250=73.32% 00:27:33.814 cpu : usr=1.29%, sys=1.79%, ctx=1966, majf=0, minf=1 00:27:33.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:33.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.814 issued rwts: total=0,6271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.814 job1: (groupid=0, jobs=1): err= 0: pid=4086314: Fri Jul 12 01:45:59 2024 00:27:33.814 write: IOPS=616, BW=154MiB/s (162MB/s)(1560MiB/10125msec); 0 zone resets 00:27:33.814 slat (usec): min=25, max=79940, avg=1483.48, stdev=3044.80 00:27:33.814 clat (msec): min=2, max=252, avg=102.36, stdev=29.90 00:27:33.815 lat (msec): min=3, max=252, avg=103.84, stdev=30.30 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 50], 20.00th=[ 100], 00:27:33.815 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 00:27:33.815 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 134], 95.00th=[ 142], 00:27:33.815 | 99.00th=[ 157], 99.50th=[ 194], 99.90th=[ 236], 99.95th=[ 245], 00:27:33.815 | 99.99th=[ 253] 00:27:33.815 bw ( KiB/s): min=112640, max=252928, per=9.02%, avg=158080.00, stdev=35560.92, samples=20 00:27:33.815 iops : min= 440, max= 988, avg=617.50, stdev=138.91, samples=20 00:27:33.815 lat (msec) : 4=0.06%, 10=0.72%, 20=2.04%, 50=7.25%, 100=13.96% 00:27:33.815 lat (msec) : 250=75.94%, 500=0.03% 00:27:33.815 cpu : usr=1.52%, sys=1.66%, ctx=2159, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,6238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job2: (groupid=0, jobs=1): err= 0: pid=4086321: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=578, BW=145MiB/s (152MB/s)(1461MiB/10102msec); 0 zone resets 00:27:33.815 slat (usec): min=19, max=17960, avg=1601.46, stdev=3043.66 00:27:33.815 clat (msec): min=9, max=208, avg=109.02, stdev=30.30 00:27:33.815 lat (msec): min=10, max=208, avg=110.62, stdev=30.74 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 62], 20.00th=[ 96], 00:27:33.815 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 127], 00:27:33.815 | 70.00th=[ 132], 80.00th=[ 136], 90.00th=[ 136], 95.00th=[ 138], 00:27:33.815 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 203], 99.95th=[ 203], 00:27:33.815 | 99.99th=[ 209] 00:27:33.815 bw ( KiB/s): min=114688, max=242176, per=8.45%, avg=147968.00, stdev=36276.20, samples=20 00:27:33.815 iops : min= 448, max= 946, avg=578.00, 
stdev=141.70, samples=20 00:27:33.815 lat (msec) : 10=0.02%, 20=0.70%, 50=5.65%, 100=16.89%, 250=76.74% 00:27:33.815 cpu : usr=1.40%, sys=1.65%, ctx=2005, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,5843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job3: (groupid=0, jobs=1): err= 0: pid=4086323: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=757, BW=189MiB/s (199MB/s)(1906MiB/10062msec); 0 zone resets 00:27:33.815 slat (usec): min=14, max=271370, avg=1202.39, stdev=4163.59 00:27:33.815 clat (usec): min=1986, max=362063, avg=83237.95, stdev=34954.60 00:27:33.815 lat (msec): min=4, max=362, avg=84.44, stdev=35.30 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 62], 00:27:33.815 | 30.00th=[ 68], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 93], 00:27:33.815 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 125], 00:27:33.815 | 99.00th=[ 178], 99.50th=[ 317], 99.90th=[ 359], 99.95th=[ 363], 00:27:33.815 | 99.99th=[ 363] 00:27:33.815 bw ( KiB/s): min=110080, max=334336, per=11.05%, avg=193510.40, stdev=52382.40, samples=20 00:27:33.815 iops : min= 430, max= 1306, avg=755.90, stdev=204.62, samples=20 00:27:33.815 lat (msec) : 2=0.01%, 4=0.01%, 10=0.60%, 20=1.15%, 50=12.45% 00:27:33.815 lat (msec) : 100=72.36%, 250=12.58%, 500=0.83% 00:27:33.815 cpu : usr=1.72%, sys=1.98%, ctx=2511, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,7622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job4: (groupid=0, jobs=1): err= 0: pid=4086324: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=501, BW=125MiB/s (132MB/s)(1270MiB/10125msec); 0 zone resets 00:27:33.815 slat (usec): min=25, max=49996, avg=1938.95, stdev=3552.29 00:27:33.815 clat (msec): min=41, max=252, avg=125.51, stdev=17.02 00:27:33.815 lat (msec): min=41, max=252, avg=127.45, stdev=16.93 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 83], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 111], 00:27:33.815 | 30.00th=[ 114], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 134], 00:27:33.815 | 70.00th=[ 136], 80.00th=[ 136], 90.00th=[ 140], 95.00th=[ 144], 00:27:33.815 | 99.00th=[ 157], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 245], 00:27:33.815 | 99.99th=[ 253] 00:27:33.815 bw ( KiB/s): min=114688, max=147456, per=7.33%, avg=128460.80, stdev=12242.89, samples=20 00:27:33.815 iops : min= 448, max= 576, avg=501.80, stdev=47.82, samples=20 00:27:33.815 lat (msec) : 50=0.16%, 100=3.58%, 250=96.22%, 500=0.04% 00:27:33.815 cpu : usr=1.21%, sys=1.44%, ctx=1354, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,5081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:27:33.815 job5: (groupid=0, jobs=1): err= 0: pid=4086325: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=643, BW=161MiB/s (169MB/s)(1625MiB/10100msec); 0 zone resets 00:27:33.815 slat (usec): min=21, max=29578, avg=1522.49, stdev=2695.57 00:27:33.815 clat (msec): min=7, max=208, avg=97.88, stdev=18.16 00:27:33.815 lat (msec): min=7, max=208, avg=99.40, stdev=18.25 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 41], 5.00th=[ 71], 10.00th=[ 86], 20.00th=[ 89], 00:27:33.815 | 30.00th=[ 93], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 95], 00:27:33.815 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 130], 00:27:33.815 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 197], 99.95th=[ 203], 00:27:33.815 | 99.99th=[ 209] 00:27:33.815 bw ( KiB/s): min=120832, max=211968, per=9.41%, avg=164812.80, stdev=21070.33, samples=20 00:27:33.815 iops : min= 472, max= 828, avg=643.80, stdev=82.31, samples=20 00:27:33.815 lat (msec) : 10=0.06%, 20=0.23%, 50=0.98%, 100=65.11%, 250=33.61% 00:27:33.815 cpu : usr=1.42%, sys=1.83%, ctx=1731, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,6501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job6: (groupid=0, jobs=1): err= 0: pid=4086326: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=616, BW=154MiB/s (162MB/s)(1561MiB/10126msec); 0 zone resets 00:27:33.815 slat (usec): min=23, max=80220, avg=1557.36, stdev=3090.26 00:27:33.815 clat (msec): min=11, max=250, avg=101.79, stdev=21.66 00:27:33.815 lat (msec): min=13, max=251, avg=103.34, stdev=21.79 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 35], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 90], 00:27:33.815 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 107], 00:27:33.815 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 133], 00:27:33.815 | 99.00th=[ 163], 99.50th=[ 190], 99.90th=[ 243], 99.95th=[ 243], 00:27:33.815 | 99.99th=[ 251] 00:27:33.815 bw ( KiB/s): min=120832, max=229376, per=9.03%, avg=158233.60, stdev=27239.01, samples=20 00:27:33.815 iops : min= 472, max= 896, avg=618.10, stdev=106.40, samples=20 00:27:33.815 lat (msec) : 20=0.10%, 50=1.75%, 100=29.40%, 250=68.71%, 500=0.05% 00:27:33.815 cpu : usr=1.27%, sys=1.91%, ctx=1770, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,6244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job7: (groupid=0, jobs=1): err= 0: pid=4086327: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=588, BW=147MiB/s (154MB/s)(1481MiB/10063msec); 0 zone resets 00:27:33.815 slat (usec): min=22, max=22577, avg=1612.73, stdev=3038.60 00:27:33.815 clat (msec): min=3, max=155, avg=107.07, stdev=32.06 00:27:33.815 lat (msec): min=3, max=155, avg=108.68, stdev=32.51 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 17], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 65], 00:27:33.815 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 115], 
60.00th=[ 128], 00:27:33.815 | 70.00th=[ 132], 80.00th=[ 136], 90.00th=[ 136], 95.00th=[ 138], 00:27:33.815 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:27:33.815 | 99.99th=[ 157] 00:27:33.815 bw ( KiB/s): min=118784, max=267776, per=8.56%, avg=150041.60, stdev=44201.34, samples=20 00:27:33.815 iops : min= 464, max= 1046, avg=586.10, stdev=172.66, samples=20 00:27:33.815 lat (msec) : 4=0.02%, 10=0.41%, 20=0.89%, 50=3.07%, 100=23.31% 00:27:33.815 lat (msec) : 250=72.30% 00:27:33.815 cpu : usr=1.31%, sys=1.75%, ctx=1831, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,5924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.815 job8: (groupid=0, jobs=1): err= 0: pid=4086328: Fri Jul 12 01:45:59 2024 00:27:33.815 write: IOPS=529, BW=132MiB/s (139MB/s)(1342MiB/10126msec); 0 zone resets 00:27:33.815 slat (usec): min=17, max=36121, avg=1705.91, stdev=3301.45 00:27:33.815 clat (msec): min=2, max=251, avg=119.02, stdev=27.72 00:27:33.815 lat (msec): min=3, max=251, avg=120.72, stdev=28.12 00:27:33.815 clat percentiles (msec): 00:27:33.815 | 1.00th=[ 30], 5.00th=[ 52], 10.00th=[ 87], 20.00th=[ 105], 00:27:33.815 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 133], 00:27:33.815 | 70.00th=[ 136], 80.00th=[ 136], 90.00th=[ 140], 95.00th=[ 148], 00:27:33.815 | 99.00th=[ 163], 99.50th=[ 194], 99.90th=[ 243], 99.95th=[ 245], 00:27:33.815 | 99.99th=[ 253] 00:27:33.815 bw ( KiB/s): min=114688, max=184689, per=7.75%, avg=135775.25, stdev=22191.64, samples=20 00:27:33.815 iops : min= 448, max= 721, avg=530.35, stdev=86.63, samples=20 00:27:33.815 lat (msec) : 4=0.04%, 10=0.22%, 20=0.07%, 50=4.12%, 100=10.59% 00:27:33.815 lat (msec) : 250=84.92%, 500=0.04% 00:27:33.815 cpu : usr=1.13%, sys=1.60%, ctx=1906, majf=0, minf=1 00:27:33.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:33.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.815 issued rwts: total=0,5366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.816 job9: (groupid=0, jobs=1): err= 0: pid=4086329: Fri Jul 12 01:45:59 2024 00:27:33.816 write: IOPS=782, BW=196MiB/s (205MB/s)(1967MiB/10055msec); 0 zone resets 00:27:33.816 slat (usec): min=17, max=63404, avg=1178.04, stdev=2562.08 00:27:33.816 clat (msec): min=3, max=161, avg=80.56, stdev=25.72 00:27:33.816 lat (msec): min=3, max=161, avg=81.74, stdev=26.03 00:27:33.816 clat percentiles (msec): 00:27:33.816 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 56], 00:27:33.816 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 93], 00:27:33.816 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 121], 00:27:33.816 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 157], 00:27:33.816 | 99.99th=[ 163] 00:27:33.816 bw ( KiB/s): min=130560, max=299008, per=11.41%, avg=199829.40, stdev=53612.72, samples=20 00:27:33.816 iops : min= 510, max= 1168, avg=780.55, stdev=209.42, samples=20 00:27:33.816 lat (msec) : 4=0.03%, 10=1.00%, 20=1.36%, 50=5.92%, 100=76.73% 00:27:33.816 lat (msec) : 250=14.96% 00:27:33.816 cpu : 
usr=1.52%, sys=2.28%, ctx=2497, majf=0, minf=1 00:27:33.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:33.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.816 issued rwts: total=0,7868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.816 job10: (groupid=0, jobs=1): err= 0: pid=4086330: Fri Jul 12 01:45:59 2024 00:27:33.816 write: IOPS=627, BW=157MiB/s (164MB/s)(1584MiB/10103msec); 0 zone resets 00:27:33.816 slat (usec): min=22, max=11902, avg=1519.27, stdev=2704.82 00:27:33.816 clat (msec): min=8, max=206, avg=100.52, stdev=17.04 00:27:33.816 lat (msec): min=9, max=206, avg=102.03, stdev=17.17 00:27:33.816 clat percentiles (msec): 00:27:33.816 | 1.00th=[ 29], 5.00th=[ 71], 10.00th=[ 84], 20.00th=[ 94], 00:27:33.816 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 107], 00:27:33.816 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 111], 95.00th=[ 115], 00:27:33.816 | 99.00th=[ 130], 99.50th=[ 153], 99.90th=[ 194], 99.95th=[ 201], 00:27:33.816 | 99.99th=[ 207] 00:27:33.816 bw ( KiB/s): min=145408, max=225280, per=9.17%, avg=160563.20, stdev=18730.79, samples=20 00:27:33.816 iops : min= 568, max= 880, avg=627.20, stdev=73.17, samples=20 00:27:33.816 lat (msec) : 10=0.03%, 20=0.36%, 50=2.35%, 100=27.02%, 250=70.23% 00:27:33.816 cpu : usr=1.31%, sys=1.88%, ctx=1862, majf=0, minf=1 00:27:33.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:33.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:33.816 issued rwts: total=0,6335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:33.816 00:27:33.816 Run status group 0 (all jobs): 00:27:33.816 WRITE: bw=1711MiB/s (1794MB/s), 125MiB/s-196MiB/s (132MB/s-205MB/s), io=16.9GiB (18.2GB), run=10055-10126msec 00:27:33.816 00:27:33.816 Disk stats (read/write): 00:27:33.816 nvme0n1: ios=49/12523, merge=0/0, ticks=81/1230145, in_queue=1230226, util=96.89% 00:27:33.816 nvme10n1: ios=47/12438, merge=0/0, ticks=82/1229938, in_queue=1230020, util=97.17% 00:27:33.816 nvme1n1: ios=20/11673, merge=0/0, ticks=325/1231350, in_queue=1231675, util=97.76% 00:27:33.816 nvme2n1: ios=47/14810, merge=0/0, ticks=2955/1164796, in_queue=1167751, util=100.00% 00:27:33.816 nvme3n1: ios=49/10125, merge=0/0, ticks=2160/1223798, in_queue=1225958, util=99.88% 00:27:33.816 nvme4n1: ios=0/12990, merge=0/0, ticks=0/1228058, in_queue=1228058, util=97.78% 00:27:33.816 nvme5n1: ios=46/12449, merge=0/0, ticks=1909/1216341, in_queue=1218250, util=100.00% 00:27:33.816 nvme6n1: ios=0/11414, merge=0/0, ticks=0/1201925, in_queue=1201925, util=98.10% 00:27:33.816 nvme7n1: ios=0/10693, merge=0/0, ticks=0/1230411, in_queue=1230411, util=98.67% 00:27:33.816 nvme8n1: ios=43/15246, merge=0/0, ticks=1456/1195244, in_queue=1196700, util=99.88% 00:27:33.816 nvme9n1: ios=0/12651, merge=0/0, ticks=0/1229608, in_queue=1229608, util=99.10% 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.816 01:45:59 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:33.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:33.816 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.816 01:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:33.816 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.816 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:34.077 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.077 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.338 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:34.339 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:34.339 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # return 0 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:34.600 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.600 01:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:34.861 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.861 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:35.122 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.122 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:35.382 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:35.382 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:35.383 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK10 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.383 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:35.644 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:27:35.644 rmmod nvme_tcp 00:27:35.644 rmmod nvme_fabrics 00:27:35.644 rmmod nvme_keyring 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4074888 ']' 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4074888 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 4074888 ']' 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 4074888 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4074888 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4074888' 00:27:35.644 killing process with pid 4074888 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 4074888 00:27:35.644 01:46:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 4074888 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.905 01:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.462 01:46:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.462 00:27:38.462 real 1m17.904s 00:27:38.462 user 4m56.110s 00:27:38.462 sys 0m22.177s 00:27:38.462 01:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:38.462 01:46:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.462 ************************************ 00:27:38.462 END TEST nvmf_multiconnection 00:27:38.462 ************************************ 00:27:38.462 01:46:04 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:38.462 01:46:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:38.462 01:46:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.462 01:46:04 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.462 ************************************ 00:27:38.462 START TEST nvmf_initiator_timeout 00:27:38.462 ************************************ 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:38.462 * Looking for test storage... 00:27:38.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.462 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.463 01:46:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:46.606 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:46.606 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.606 01:46:12 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:46.606 Found net devices under 0000:31:00.0: cvl_0_0 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:46.606 Found net devices under 0000:31:00.1: cvl_0_1 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.606 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:27:46.607 00:27:46.607 --- 10.0.0.2 ping statistics --- 00:27:46.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.607 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:27:46.607 00:27:46.607 --- 10.0.0.1 ping statistics --- 00:27:46.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.607 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4093289 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4093289 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 4093289 ']' 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:46.607 01:46:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.607 [2024-07-12 01:46:12.790124] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:46.607 [2024-07-12 01:46:12.790206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.607 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.607 [2024-07-12 01:46:12.874491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.607 [2024-07-12 01:46:12.913800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.607 [2024-07-12 01:46:12.913844] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.607 [2024-07-12 01:46:12.913852] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.607 [2024-07-12 01:46:12.913859] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.607 [2024-07-12 01:46:12.913865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
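[Editor's annotation, not part of the captured console output] The entries above show nvmf_tgt being started inside the cvl_0_0_ns_spdk network namespace with the DPDK/EAL parameters and tracepoint group mask 0xFFFF. A minimal sketch of the equivalent manual invocation, based only on the command and notice printed in the log above (paths and flags are taken verbatim from it, not verified independently):

    # launch the NVMe-oF target in the test namespace, mirroring nvmf/common.sh@480 above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # capture a runtime snapshot of trace events, as the app_setup_trace notice above suggests
    spdk_trace -s nvmf -i 0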
00:27:46.607 [2024-07-12 01:46:12.914007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.607 [2024-07-12 01:46:12.914132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.607 [2024-07-12 01:46:12.914348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.607 [2024-07-12 01:46:12.914349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 Malloc0 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 Delay0 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 [2024-07-12 01:46:13.650981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.548 [2024-07-12 01:46:13.691252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.548 01:46:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:48.932 01:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:48.932 01:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:27:48.932 01:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:48.932 01:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:48.932 01:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4094135 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:51.481 01:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:51.481 [global] 00:27:51.481 thread=1 00:27:51.481 invalidate=1 00:27:51.481 rw=write 00:27:51.481 time_based=1 00:27:51.481 runtime=60 00:27:51.481 ioengine=libaio 00:27:51.481 direct=1 00:27:51.481 bs=4096 00:27:51.481 iodepth=1 00:27:51.481 norandommap=0 00:27:51.481 numjobs=1 00:27:51.481 00:27:51.481 verify_dump=1 00:27:51.481 verify_backlog=512 00:27:51.481 verify_state_save=0 00:27:51.481 do_verify=1 00:27:51.481 verify=crc32c-intel 00:27:51.481 [job0] 00:27:51.481 filename=/dev/nvme0n1 00:27:51.481 Could not set queue depth (nvme0n1) 00:27:51.481 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.481 fio-3.35 00:27:51.481 
Starting 1 thread 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.029 true 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.029 true 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.029 true 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.029 true 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.029 01:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.331 true 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.331 true 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.331 true 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.331 true 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:57.331 01:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4094135 00:28:53.597 00:28:53.597 job0: (groupid=0, jobs=1): err= 0: pid=4094448: Fri Jul 12 01:47:17 2024 00:28:53.597 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(5408KiB/60009msec) 00:28:53.597 slat (usec): min=6, max=252, avg=25.51, stdev= 6.94 00:28:53.597 clat (usec): min=475, max=41875k, avg=43583.58, stdev=1138666.78 00:28:53.597 lat (usec): min=500, max=41875k, avg=43609.08, stdev=1138666.77 00:28:53.597 clat percentiles (usec): 00:28:53.597 | 1.00th=[ 586], 5.00th=[ 799], 10.00th=[ 840], 00:28:53.597 | 20.00th=[ 938], 30.00th=[ 971], 40.00th=[ 1004], 00:28:53.597 | 50.00th=[ 1057], 60.00th=[ 1106], 70.00th=[ 1172], 00:28:53.597 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:28:53.597 | 99.00th=[ 42730], 99.50th=[ 42730], 99.90th=[ 43254], 00:28:53.597 | 99.95th=[17112761], 99.99th=[17112761] 00:28:53.597 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60009msec); 0 zone resets 00:28:53.597 slat (usec): min=9, max=33044, avg=50.27, stdev=842.48 00:28:53.597 clat (usec): min=250, max=951, avg=617.78, stdev=147.11 00:28:53.597 lat (usec): min=274, max=33610, avg=668.05, stdev=854.42 00:28:53.597 clat percentiles (usec): 00:28:53.597 | 1.00th=[ 273], 5.00th=[ 367], 10.00th=[ 416], 20.00th=[ 474], 00:28:53.597 | 30.00th=[ 529], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 676], 00:28:53.597 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 832], 00:28:53.597 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 955], 00:28:53.597 | 99.99th=[ 955] 00:28:53.597 bw ( KiB/s): min= 672, max= 4096, per=100.00%, avg=3072.00, stdev=1631.06, samples=4 00:28:53.597 iops : min= 168, max= 1024, avg=768.00, stdev=407.76, samples=4 00:28:53.597 lat (usec) : 500=13.19%, 750=30.33%, 1000=27.46% 00:28:53.597 lat (msec) : 2=15.72%, 50=13.26%, >=2000=0.03% 00:28:53.597 cpu : usr=0.07%, sys=0.14%, ctx=2893, majf=0, minf=1 00:28:53.597 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.597 issued rwts: total=1352,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.597 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:53.597 00:28:53.597 Run status group 0 (all jobs): 00:28:53.597 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=5408KiB (5538kB), run=60009-60009msec 00:28:53.597 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60009-60009msec 00:28:53.597 00:28:53.597 Disk stats (read/write): 00:28:53.597 nvme0n1: ios=1402/1536, merge=0/0, ticks=17522/915, in_queue=18437, util=100.00% 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:53.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:53.597 01:47:17 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:53.597 nvmf hotplug test: fio successful as expected 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.597 01:47:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.597 rmmod nvme_tcp 00:28:53.597 rmmod nvme_fabrics 00:28:53.597 rmmod nvme_keyring 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4093289 ']' 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4093289 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 4093289 ']' 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 4093289 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:53.597 01:47:18 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4093289 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:53.597 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4093289' 00:28:53.597 killing process with pid 4093289 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 4093289 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 4093289 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.598 01:47:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.170 01:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:54.170 00:28:54.170 real 1m15.901s 00:28:54.170 user 4m37.067s 00:28:54.170 sys 0m7.602s 00:28:54.170 01:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:54.170 01:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.170 ************************************ 00:28:54.170 END TEST nvmf_initiator_timeout 00:28:54.170 ************************************ 00:28:54.170 01:47:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:54.170 01:47:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:54.170 01:47:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:54.170 01:47:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.170 01:47:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:29:02.418 01:47:28 nvmf_tcp -- 
nvmf/common.sh@298 -- # mlx=() 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.418 01:47:28 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:02.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:02.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:02.419 Found net devices under 0000:31:00.0: cvl_0_0 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:02.419 Found net devices under 0000:31:00.1: cvl_0_1 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:29:02.419 01:47:28 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:02.419 01:47:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:02.419 01:47:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:02.419 01:47:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.419 ************************************ 00:29:02.419 START TEST nvmf_perf_adq 00:29:02.419 ************************************ 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:02.419 * Looking for test storage... 
00:29:02.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.419 01:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:10.559 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:10.559 Found 0000:31:00.1 (0x8086 - 0x159b) 
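The repeated blocks around this point are nvmf/common.sh's gather_supported_nvmf_pci_devs pass: PCI functions are bucketed by vendor:device ID (0x8086/0x159b lands in the e810 array), each function's kernel net device is resolved through sysfs, and the resulting names (cvl_0_0, cvl_0_1 in this run) become TCP_INTERFACE_LIST. A minimal bash sketch of that pattern follows; it assumes lspci is available, whereas the in-tree common.sh reads a pre-built pci_bus_cache map and performs extra driver checks, so this is illustrative only, not the common.sh source.

#!/usr/bin/env bash
# Illustrative re-creation of the device-discovery pattern traced in this log
# (not the nvmf/common.sh source). Assumes lspci; common.sh uses a cached PCI map.
e810=() net_devs=()
# Collect Intel E810 functions (device ID 0x159b, as echoed above).
while read -r bdf; do
    e810+=("$bdf")
    # Each bound function exposes its netdev name under sysfs.
    for net_dev in /sys/bus/pci/devices/"$bdf"/net/*; do
        [[ -e $net_dev ]] || continue
        # Only interfaces that are administratively up are usable for the test.
        [[ $(cat "$net_dev/operstate") == up ]] && net_devs+=("${net_dev##*/}")
    done
done < <(lspci -D -d 8086:159b | awk '{print $1}')
echo "TCP test interfaces: ${net_devs[*]}"   # e.g. cvl_0_0 cvl_0_1 in this run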
00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:10.559 Found net devices under 0000:31:00.0: cvl_0_0 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:10.559 Found net devices under 0000:31:00.1: cvl_0_1 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:10.559 01:47:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:11.130 01:47:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:13.043 01:47:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:18.328 01:47:44 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:18.328 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:18.328 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:18.328 Found net devices under 0000:31:00.0: cvl_0_0 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:18.328 Found net devices under 0000:31:00.1: cvl_0_1 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.328 01:47:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:29:18.328 00:29:18.328 --- 10.0.0.2 ping statistics --- 00:29:18.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.328 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:29:18.328 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:29:18.328 00:29:18.328 --- 10.0.0.1 ping statistics --- 00:29:18.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.329 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4116199 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4116199 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4116199 ']' 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:18.329 01:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.329 [2024-07-12 01:47:44.571926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
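nvmf_tcp_init, traced just above, turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, and a single ping in each direction confirms the path before nvmf_tgt is started. A condensed sketch of those steps, using the interface, namespace, and address names from this run (error handling omitted):

# Condensed sketch of the nvmf_tcp_init topology traced above.
TARGET_IF=cvl_0_0            # moved into the target namespace
INITIATOR_IF=cvl_0_1         # stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator side, then prove both
# directions are reachable before the target application is launched.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1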
00:29:18.329 [2024-07-12 01:47:44.571972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.329 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.329 [2024-07-12 01:47:44.663947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.588 [2024-07-12 01:47:44.701104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.588 [2024-07-12 01:47:44.701147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.588 [2024-07-12 01:47:44.701156] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.588 [2024-07-12 01:47:44.701164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.588 [2024-07-12 01:47:44.701170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.588 [2024-07-12 01:47:44.701263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.588 [2024-07-12 01:47:44.701451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.588 [2024-07-12 01:47:44.701451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.588 [2024-07-12 01:47:44.701330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.159 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.421 [2024-07-12 01:47:45.519120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.421 Malloc1 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.421 [2024-07-12 01:47:45.578508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4116526 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:19.421 01:47:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:19.421 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.335 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:29:21.335 01:47:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.335 01:47:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 01:47:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.335 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:29:21.335 "tick_rate": 2400000000, 
00:29:21.335 "poll_groups": [ 00:29:21.335 { 00:29:21.335 "name": "nvmf_tgt_poll_group_000", 00:29:21.335 "admin_qpairs": 1, 00:29:21.335 "io_qpairs": 1, 00:29:21.335 "current_admin_qpairs": 1, 00:29:21.335 "current_io_qpairs": 1, 00:29:21.335 "pending_bdev_io": 0, 00:29:21.335 "completed_nvme_io": 20330, 00:29:21.335 "transports": [ 00:29:21.335 { 00:29:21.335 "trtype": "TCP" 00:29:21.335 } 00:29:21.335 ] 00:29:21.335 }, 00:29:21.335 { 00:29:21.335 "name": "nvmf_tgt_poll_group_001", 00:29:21.335 "admin_qpairs": 0, 00:29:21.335 "io_qpairs": 1, 00:29:21.335 "current_admin_qpairs": 0, 00:29:21.335 "current_io_qpairs": 1, 00:29:21.335 "pending_bdev_io": 0, 00:29:21.335 "completed_nvme_io": 28866, 00:29:21.335 "transports": [ 00:29:21.335 { 00:29:21.335 "trtype": "TCP" 00:29:21.335 } 00:29:21.335 ] 00:29:21.335 }, 00:29:21.335 { 00:29:21.335 "name": "nvmf_tgt_poll_group_002", 00:29:21.335 "admin_qpairs": 0, 00:29:21.335 "io_qpairs": 1, 00:29:21.335 "current_admin_qpairs": 0, 00:29:21.335 "current_io_qpairs": 1, 00:29:21.335 "pending_bdev_io": 0, 00:29:21.335 "completed_nvme_io": 20427, 00:29:21.335 "transports": [ 00:29:21.335 { 00:29:21.335 "trtype": "TCP" 00:29:21.335 } 00:29:21.335 ] 00:29:21.335 }, 00:29:21.335 { 00:29:21.335 "name": "nvmf_tgt_poll_group_003", 00:29:21.335 "admin_qpairs": 0, 00:29:21.335 "io_qpairs": 1, 00:29:21.335 "current_admin_qpairs": 0, 00:29:21.335 "current_io_qpairs": 1, 00:29:21.335 "pending_bdev_io": 0, 00:29:21.335 "completed_nvme_io": 20331, 00:29:21.335 "transports": [ 00:29:21.335 { 00:29:21.335 "trtype": "TCP" 00:29:21.335 } 00:29:21.335 ] 00:29:21.335 } 00:29:21.335 ] 00:29:21.335 }' 00:29:21.336 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:21.336 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:29:21.336 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:29:21.336 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:29:21.336 01:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4116526 00:29:29.475 Initializing NVMe Controllers 00:29:29.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:29.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:29.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:29.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:29.475 Initialization complete. Launching workers. 
00:29:29.475 ======================================================== 00:29:29.475 Latency(us) 00:29:29.475 Device Information : IOPS MiB/s Average min max 00:29:29.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13817.80 53.98 4632.06 920.67 9974.92 00:29:29.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15332.00 59.89 4174.20 893.68 8454.28 00:29:29.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13838.30 54.06 4624.27 957.88 11358.91 00:29:29.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11398.50 44.53 5614.40 1723.66 11004.33 00:29:29.475 ======================================================== 00:29:29.475 Total : 54386.60 212.45 4706.88 893.68 11358.91 00:29:29.475 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:29.475 rmmod nvme_tcp 00:29:29.475 rmmod nvme_fabrics 00:29:29.475 rmmod nvme_keyring 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4116199 ']' 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4116199 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4116199 ']' 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4116199 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:29.475 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4116199 00:29:29.734 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:29.734 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:29.734 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4116199' 00:29:29.734 killing process with pid 4116199 00:29:29.734 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4116199 00:29:29.734 01:47:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4116199 00:29:29.734 01:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.734 01:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.734 01:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.734 01:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.734 01:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.735 01:47:56 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.735 01:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.735 01:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.277 01:47:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.277 01:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:32.277 01:47:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:33.662 01:47:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:35.572 01:48:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.857 01:48:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:40.857 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:40.857 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
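Once nvmf_tgt is listening on /var/tmp/spdk.sock, both phases of the test provision it with the same RPC sequence (visible above after the first nvmfappstart and repeated below for the busy-poll phase): enable placement IDs and zero-copy send on the posix sock implementation, finish framework init (the target was started with --wait-for-rpc), create the TCP transport with an 8192-byte I/O unit and the phase's socket priority, and expose a 64 MiB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A condensed sketch of the same calls issued directly through SPDK's rpc.py is below; the harness actually wraps them in its rpc_cmd helper, the default /var/tmp/spdk.sock socket is assumed, and --enable-placement-id / --sock-priority are 0 in the baseline phase and 1 in the ADQ phase.

# Condensed sketch of the target-provisioning RPC sequence (values from this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$RPC framework_start_init                        # target was started with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420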
00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:40.857 Found net devices under 0000:31:00.0: cvl_0_0 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:40.857 Found net devices under 0000:31:00.1: cvl_0_1 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.857 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.858 
01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:40.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:29:40.858 00:29:40.858 --- 10.0.0.2 ping statistics --- 00:29:40.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.858 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:29:40.858 00:29:40.858 --- 10.0.0.1 ping statistics --- 00:29:40.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.858 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:40.858 net.core.busy_poll = 1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:40.858 net.core.busy_read = 1 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:40.858 01:48:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:40.858 01:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:29:40.858 01:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:40.858 01:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4121106 00:29:41.119 01:48:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4121106 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4121106 ']' 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:41.120 01:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.120 [2024-07-12 01:48:07.300027] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:41.120 [2024-07-12 01:48:07.300092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.120 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.120 [2024-07-12 01:48:07.381526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.120 [2024-07-12 01:48:07.420113] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.120 [2024-07-12 01:48:07.420160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.120 [2024-07-12 01:48:07.420168] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.120 [2024-07-12 01:48:07.420175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.120 [2024-07-12 01:48:07.420180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
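For this second phase, adq_configure_driver (traced just above) prepares the target port for ADQ before nvmf_tgt is restarted: hardware TC offload is enabled, the driver's channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled system-wide, an mqprio qdisc carves the port into two traffic classes, and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the second class; finally the set_xps_rxqs helper pins transmit queues to their receive queues. A condensed sketch of those steps with the values from this run (set_xps_rxqs, an SPDK helper script, is omitted):

# Condensed sketch of the ADQ driver configuration traced above; run inside the
# cvl_0_0_ns_spdk namespace like the original commands.
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 gets 2 queues at offset 0, TC1 gets 2 queues at offset 2.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP connections (10.0.0.2:4420) into TC1; skip_sw keeps the match in NIC hardware.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1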
00:29:41.120 [2024-07-12 01:48:07.420274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.120 [2024-07-12 01:48:07.420385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.120 [2024-07-12 01:48:07.420385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.120 [2024-07-12 01:48:07.420339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.062 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 [2024-07-12 01:48:08.266470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 Malloc1 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:42.063 [2024-07-12 01:48:08.325822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4121312 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:42.063 01:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:42.063 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:44.607 "tick_rate": 2400000000, 00:29:44.607 "poll_groups": [ 00:29:44.607 { 00:29:44.607 "name": "nvmf_tgt_poll_group_000", 00:29:44.607 "admin_qpairs": 1, 00:29:44.607 "io_qpairs": 3, 00:29:44.607 "current_admin_qpairs": 1, 00:29:44.607 "current_io_qpairs": 3, 00:29:44.607 "pending_bdev_io": 0, 00:29:44.607 "completed_nvme_io": 30192, 00:29:44.607 "transports": [ 00:29:44.607 { 00:29:44.607 "trtype": "TCP" 00:29:44.607 } 00:29:44.607 ] 00:29:44.607 }, 00:29:44.607 { 00:29:44.607 "name": "nvmf_tgt_poll_group_001", 00:29:44.607 "admin_qpairs": 0, 00:29:44.607 "io_qpairs": 1, 00:29:44.607 "current_admin_qpairs": 0, 00:29:44.607 "current_io_qpairs": 1, 00:29:44.607 "pending_bdev_io": 0, 00:29:44.607 "completed_nvme_io": 35950, 00:29:44.607 "transports": [ 00:29:44.607 { 00:29:44.607 "trtype": "TCP" 00:29:44.607 } 00:29:44.607 ] 00:29:44.607 }, 00:29:44.607 { 00:29:44.607 "name": "nvmf_tgt_poll_group_002", 00:29:44.607 "admin_qpairs": 0, 00:29:44.607 "io_qpairs": 0, 00:29:44.607 "current_admin_qpairs": 0, 00:29:44.607 "current_io_qpairs": 0, 00:29:44.607 "pending_bdev_io": 0, 00:29:44.607 "completed_nvme_io": 0, 
00:29:44.607 "transports": [ 00:29:44.607 { 00:29:44.607 "trtype": "TCP" 00:29:44.607 } 00:29:44.607 ] 00:29:44.607 }, 00:29:44.607 { 00:29:44.607 "name": "nvmf_tgt_poll_group_003", 00:29:44.607 "admin_qpairs": 0, 00:29:44.607 "io_qpairs": 0, 00:29:44.607 "current_admin_qpairs": 0, 00:29:44.607 "current_io_qpairs": 0, 00:29:44.607 "pending_bdev_io": 0, 00:29:44.607 "completed_nvme_io": 0, 00:29:44.607 "transports": [ 00:29:44.607 { 00:29:44.607 "trtype": "TCP" 00:29:44.607 } 00:29:44.607 ] 00:29:44.607 } 00:29:44.607 ] 00:29:44.607 }' 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:44.607 01:48:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4121312 00:29:52.900 Initializing NVMe Controllers 00:29:52.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:52.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:52.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:52.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:52.900 Initialization complete. Launching workers. 00:29:52.900 ======================================================== 00:29:52.900 Latency(us) 00:29:52.900 Device Information : IOPS MiB/s Average min max 00:29:52.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5770.70 22.54 11095.72 1369.80 56481.19 00:29:52.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7917.30 30.93 8109.04 1207.17 55161.82 00:29:52.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 19832.30 77.47 3226.77 1151.62 44478.23 00:29:52.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6524.00 25.48 9811.37 1039.65 58595.57 00:29:52.900 ======================================================== 00:29:52.900 Total : 40044.29 156.42 6398.80 1039.65 58595.57 00:29:52.900 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.900 rmmod nvme_tcp 00:29:52.900 rmmod nvme_fabrics 00:29:52.900 rmmod nvme_keyring 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4121106 ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4121106 ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4121106' 00:29:52.900 killing process with pid 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4121106 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.900 01:48:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.200 01:48:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.200 01:48:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:56.200 00:29:56.200 real 0m53.707s 00:29:56.200 user 2m50.230s 00:29:56.200 sys 0m10.662s 00:29:56.200 01:48:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:56.200 01:48:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:56.200 ************************************ 00:29:56.200 END TEST nvmf_perf_adq 00:29:56.200 ************************************ 00:29:56.200 01:48:21 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:56.200 01:48:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:56.200 01:48:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:56.200 01:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.200 ************************************ 00:29:56.200 START TEST nvmf_shutdown 00:29:56.200 ************************************ 00:29:56.200 01:48:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:56.200 * Looking for test storage... 
00:29:56.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.200 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:56.201 ************************************ 00:29:56.201 START TEST nvmf_shutdown_tc1 00:29:56.201 ************************************ 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:29:56.201 01:48:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.201 01:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:04.342 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.342 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:04.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.343 01:48:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:04.343 Found net devices under 0000:31:00.0: cvl_0_0 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:04.343 Found net devices under 0000:31:00.1: cvl_0_1 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:04.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:30:04.343 00:30:04.343 --- 10.0.0.2 ping statistics --- 00:30:04.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.343 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:04.343 00:30:04.343 --- 10.0.0.1 ping statistics --- 00:30:04.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.343 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4128722 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4128722 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4128722 ']' 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:04.343 01:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.343 [2024-07-12 01:48:30.572750] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:04.343 [2024-07-12 01:48:30.572811] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.343 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.343 [2024-07-12 01:48:30.667921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.604 [2024-07-12 01:48:30.715838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.604 [2024-07-12 01:48:30.715892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.604 [2024-07-12 01:48:30.715900] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.604 [2024-07-12 01:48:30.715907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.604 [2024-07-12 01:48:30.715913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.604 [2024-07-12 01:48:30.716036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.604 [2024-07-12 01:48:30.716198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.604 [2024-07-12 01:48:30.716326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.604 [2024-07-12 01:48:30.716481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.176 [2024-07-12 01:48:31.409879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.176 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.176 Malloc1 00:30:05.176 [2024-07-12 01:48:31.513453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.176 Malloc2 00:30:05.437 Malloc3 00:30:05.437 Malloc4 00:30:05.437 Malloc5 00:30:05.437 Malloc6 00:30:05.437 Malloc7 00:30:05.437 Malloc8 00:30:05.698 Malloc9 00:30:05.698 Malloc10 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4129102 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4129102 /var/tmp/bdevperf.sock 00:30:05.698 01:48:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4129102 ']' 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:05.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": 
"$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 [2024-07-12 01:48:31.974395] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:05.698 [2024-07-12 01:48:31.974450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.698 "hdgst": ${hdgst:-false}, 00:30:05.698 "ddgst": ${ddgst:-false} 00:30:05.698 }, 00:30:05.698 "method": "bdev_nvme_attach_controller" 00:30:05.698 } 00:30:05.698 EOF 00:30:05.698 )") 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.698 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.698 { 00:30:05.698 "params": { 00:30:05.698 "name": "Nvme$subsystem", 00:30:05.698 "trtype": "$TEST_TRANSPORT", 00:30:05.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.698 "adrfam": "ipv4", 00:30:05.698 "trsvcid": "$NVMF_PORT", 00:30:05.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.699 "hdgst": ${hdgst:-false}, 00:30:05.699 "ddgst": ${ddgst:-false} 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 } 00:30:05.699 EOF 00:30:05.699 )") 00:30:05.699 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.699 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.699 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.699 { 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme$subsystem", 00:30:05.699 "trtype": "$TEST_TRANSPORT", 00:30:05.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "$NVMF_PORT", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.699 "hdgst": ${hdgst:-false}, 00:30:05.699 "ddgst": 
${ddgst:-false} 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 } 00:30:05.699 EOF 00:30:05.699 )") 00:30:05.699 01:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:05.699 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.699 01:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:05.699 01:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:05.699 01:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme1", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme2", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme3", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme4", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme5", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme6", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme7", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 
00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme8", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme9", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 },{ 00:30:05.699 "params": { 00:30:05.699 "name": "Nvme10", 00:30:05.699 "trtype": "tcp", 00:30:05.699 "traddr": "10.0.0.2", 00:30:05.699 "adrfam": "ipv4", 00:30:05.699 "trsvcid": "4420", 00:30:05.699 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:05.699 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:05.699 "hdgst": false, 00:30:05.699 "ddgst": false 00:30:05.699 }, 00:30:05.699 "method": "bdev_nvme_attach_controller" 00:30:05.699 }' 00:30:05.699 [2024-07-12 01:48:32.042035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.960 [2024-07-12 01:48:32.073206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4129102 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:07.342 01:48:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:08.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4129102 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4128722 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.285 01:48:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.285 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.285 { 00:30:08.285 "params": { 00:30:08.285 "name": "Nvme$subsystem", 00:30:08.285 "trtype": "$TEST_TRANSPORT", 00:30:08.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.285 "adrfam": "ipv4", 00:30:08.285 "trsvcid": "$NVMF_PORT", 00:30:08.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.285 "hdgst": ${hdgst:-false}, 00:30:08.285 "ddgst": ${ddgst:-false} 00:30:08.285 }, 00:30:08.285 "method": "bdev_nvme_attach_controller" 00:30:08.285 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 [2024-07-12 01:48:34.412407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:08.286 [2024-07-12 01:48:34.412462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129476 ] 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.286 { 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme$subsystem", 00:30:08.286 "trtype": "$TEST_TRANSPORT", 00:30:08.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "$NVMF_PORT", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.286 "hdgst": ${hdgst:-false}, 00:30:08.286 "ddgst": ${ddgst:-false} 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 } 00:30:08.286 EOF 00:30:08.286 )") 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
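(Sketch) The xtrace above is stepping through the per-subsystem config builder in nvmf/common.sh: each pass of the for-loop renders one bdev_nvme_attach_controller fragment into a heredoc and appends it to the config array, which is then joined and printed. A minimal sketch of that pattern, with placeholder defaults rather than the values used in this run, looks like this:

    #!/usr/bin/env bash
    # Minimal sketch of the config-accumulation pattern traced above;
    # the defaults below are illustrative placeholders, not this run's values.
    TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
    NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
    NVMF_PORT=${NVMF_PORT:-4420}

    config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per requested subsystem.
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
        )")
    done

    # Join the fragments with commas, exactly as the IFS=, / printf '%s\n'
    # lines in the trace do before the result is handed to bdevperf.
    IFS=,
    printf '%s\n' "${config[*]}"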
00:30:08.286 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:08.286 01:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme1", 00:30:08.286 "trtype": "tcp", 00:30:08.286 "traddr": "10.0.0.2", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "4420", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:08.286 "hdgst": false, 00:30:08.286 "ddgst": false 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 },{ 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme2", 00:30:08.286 "trtype": "tcp", 00:30:08.286 "traddr": "10.0.0.2", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "4420", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:08.286 "hdgst": false, 00:30:08.286 "ddgst": false 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 },{ 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme3", 00:30:08.286 "trtype": "tcp", 00:30:08.286 "traddr": "10.0.0.2", 00:30:08.286 "adrfam": "ipv4", 00:30:08.286 "trsvcid": "4420", 00:30:08.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:08.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:08.286 "hdgst": false, 00:30:08.286 "ddgst": false 00:30:08.286 }, 00:30:08.286 "method": "bdev_nvme_attach_controller" 00:30:08.286 },{ 00:30:08.286 "params": { 00:30:08.286 "name": "Nvme4", 00:30:08.286 "trtype": "tcp", 00:30:08.286 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme5", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme6", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme7", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme8", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:08.287 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme9", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 },{ 00:30:08.287 "params": { 00:30:08.287 "name": "Nvme10", 00:30:08.287 "trtype": "tcp", 00:30:08.287 "traddr": "10.0.0.2", 00:30:08.287 "adrfam": "ipv4", 00:30:08.287 "trsvcid": "4420", 00:30:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:08.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:08.287 "hdgst": false, 00:30:08.287 "ddgst": false 00:30:08.287 }, 00:30:08.287 "method": "bdev_nvme_attach_controller" 00:30:08.287 }' 00:30:08.287 [2024-07-12 01:48:34.479676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.287 [2024-07-12 01:48:34.510909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.671 Running I/O for 1 seconds... 00:30:10.614 00:30:10.614 Latency(us) 00:30:10.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.614 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme1n1 : 1.04 246.72 15.42 0.00 0.00 256404.05 26323.63 235929.60 00:30:10.614 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme2n1 : 1.04 246.22 15.39 0.00 0.00 252409.81 20643.84 237677.23 00:30:10.614 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme3n1 : 1.12 229.16 14.32 0.00 0.00 267038.51 22282.24 262144.00 00:30:10.614 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme4n1 : 1.17 274.27 17.14 0.00 0.00 218772.14 26323.63 246415.36 00:30:10.614 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme5n1 : 1.15 222.36 13.90 0.00 0.00 266065.28 18022.40 246415.36 00:30:10.614 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme6n1 : 1.16 220.39 13.77 0.00 0.00 263975.47 20316.16 277872.64 00:30:10.614 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme7n1 : 1.16 274.97 17.19 0.00 0.00 207675.90 19442.35 235929.60 00:30:10.614 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme8n1 : 1.17 273.25 17.08 0.00 0.00 205451.95 19114.67 239424.85 00:30:10.614 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme9n1 : 1.16 221.35 13.83 0.00 0.00 248522.24 18896.21 249910.61 00:30:10.614 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:10.614 Verification LBA range: start 0x0 length 0x400 00:30:10.614 Nvme10n1 : 1.17 272.38 17.02 0.00 0.00 198655.74 12014.93 242920.11 00:30:10.614 =================================================================================================================== 00:30:10.614 Total : 2481.06 155.07 0.00 0.00 235691.82 12014.93 277872.64 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.875 rmmod nvme_tcp 00:30:10.875 rmmod nvme_fabrics 00:30:10.875 rmmod nvme_keyring 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4128722 ']' 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4128722 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 4128722 ']' 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 4128722 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4128722 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4128722' 00:30:10.875 killing process with pid 4128722 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 4128722 00:30:10.875 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 4128722 
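(Sketch) The generated JSON never touches disk: shutdown.sh hands it to bdevperf through process substitution, which is why --json shows up in the traces as a /dev/fd path. A condensed sketch of that invocation, using the flags logged above ($rootdir and num_subsystems come from the test scripts):

    "$rootdir/build/examples/bdevperf" \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 1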
00:30:11.134 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:11.134 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:11.134 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:11.135 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.135 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:11.135 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.135 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.135 01:48:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:13.678 00:30:13.678 real 0m17.415s 00:30:13.678 user 0m33.002s 00:30:13.678 sys 0m7.386s 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.678 ************************************ 00:30:13.678 END TEST nvmf_shutdown_tc1 00:30:13.678 ************************************ 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:13.678 ************************************ 00:30:13.678 START TEST nvmf_shutdown_tc2 00:30:13.678 ************************************ 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.678 01:48:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:13.678 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:13.678 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:13.678 Found net devices under 0000:31:00.0: cvl_0_0 00:30:13.678 01:48:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:13.678 Found net devices under 0000:31:00.1: cvl_0_1 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:30:13.678 00:30:13.678 --- 10.0.0.2 ping statistics --- 00:30:13.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.678 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:30:13.678 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:30:13.679 00:30:13.679 --- 10.0.0.1 ping statistics --- 00:30:13.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.679 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4130592 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4130592 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4130592 ']' 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:13.679 01:48:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.938 [2024-07-12 01:48:40.036329] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:13.938 [2024-07-12 01:48:40.036399] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.938 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.938 [2024-07-12 01:48:40.133327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.938 [2024-07-12 01:48:40.171133] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.938 [2024-07-12 01:48:40.171171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.938 [2024-07-12 01:48:40.171176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.938 [2024-07-12 01:48:40.171181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.938 [2024-07-12 01:48:40.171185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
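(Sketch) The nvmf_tcp_init steps traced above boil down to the following commands, collected here for readability. Interface names and addresses are the ones this run reported; the target binary path is abbreviated, and the target is then started inside the namespace:

    # Isolate the target-side port in its own network namespace and give the
    # initiator a 10.0.0.0/24 path to it, as in the common.sh trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # nvmf_tgt then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E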
00:30:13.938 [2024-07-12 01:48:40.171242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.938 [2024-07-12 01:48:40.171360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.938 [2024-07-12 01:48:40.171599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.938 [2024-07-12 01:48:40.171599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.509 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.509 [2024-07-12 01:48:40.864647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.769 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.770 01:48:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.770 Malloc1 00:30:14.770 [2024-07-12 01:48:40.963615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.770 Malloc2 00:30:14.770 Malloc3 00:30:14.770 Malloc4 00:30:14.770 Malloc5 00:30:15.031 Malloc6 00:30:15.031 Malloc7 00:30:15.031 Malloc8 00:30:15.031 Malloc9 00:30:15.031 Malloc10 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4130974 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4130974 /var/tmp/bdevperf.sock 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4130974 ']' 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
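(Sketch) The create_subsystems loop above appends one block of RPC commands per subsystem into rpcs.txt and replays the file with rpc_cmd, which is how Malloc1..Malloc10 and the listener on 10.0.0.2:4420 appear. A hedged sketch of what each block roughly contains; the malloc size, block size and serial strings are placeholders, only the RPC names and the NQN/listener layout come from the trace:

    for i in {1..10}; do
        cat <<EOF >> rpcs.txt
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
    # rpc_cmd wraps scripts/rpc.py against the running nvmf_tgt and accepts a
    # command list on stdin, so the whole file is replayed in one batch.
    rpc_cmd < rpcs.txt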
00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.031 { 00:30:15.031 "params": { 00:30:15.031 "name": "Nvme$subsystem", 00:30:15.031 "trtype": "$TEST_TRANSPORT", 00:30:15.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.031 "adrfam": "ipv4", 00:30:15.031 "trsvcid": "$NVMF_PORT", 00:30:15.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.031 "hdgst": ${hdgst:-false}, 00:30:15.031 "ddgst": ${ddgst:-false} 00:30:15.031 }, 00:30:15.031 "method": "bdev_nvme_attach_controller" 00:30:15.031 } 00:30:15.031 EOF 00:30:15.031 )") 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.031 { 00:30:15.031 "params": { 00:30:15.031 "name": "Nvme$subsystem", 00:30:15.031 "trtype": "$TEST_TRANSPORT", 00:30:15.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.031 "adrfam": "ipv4", 00:30:15.031 "trsvcid": "$NVMF_PORT", 00:30:15.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.031 "hdgst": ${hdgst:-false}, 00:30:15.031 "ddgst": ${ddgst:-false} 00:30:15.031 }, 00:30:15.031 "method": "bdev_nvme_attach_controller" 00:30:15.031 } 00:30:15.031 EOF 00:30:15.031 )") 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.031 { 00:30:15.031 "params": { 00:30:15.031 "name": "Nvme$subsystem", 00:30:15.031 "trtype": "$TEST_TRANSPORT", 00:30:15.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.031 "adrfam": "ipv4", 00:30:15.031 "trsvcid": "$NVMF_PORT", 00:30:15.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.031 "hdgst": ${hdgst:-false}, 00:30:15.031 "ddgst": ${ddgst:-false} 00:30:15.031 }, 00:30:15.031 "method": "bdev_nvme_attach_controller" 00:30:15.031 } 00:30:15.031 EOF 00:30:15.031 )") 00:30:15.031 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.290 { 00:30:15.290 "params": { 00:30:15.290 "name": "Nvme$subsystem", 00:30:15.290 "trtype": "$TEST_TRANSPORT", 00:30:15.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.290 "adrfam": "ipv4", 00:30:15.290 "trsvcid": "$NVMF_PORT", 00:30:15.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.290 "hdgst": ${hdgst:-false}, 00:30:15.290 "ddgst": ${ddgst:-false} 00:30:15.290 }, 00:30:15.290 "method": "bdev_nvme_attach_controller" 00:30:15.290 } 00:30:15.290 EOF 00:30:15.290 )") 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.290 { 00:30:15.290 "params": { 00:30:15.290 "name": "Nvme$subsystem", 00:30:15.290 "trtype": "$TEST_TRANSPORT", 00:30:15.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.290 "adrfam": "ipv4", 00:30:15.290 "trsvcid": "$NVMF_PORT", 00:30:15.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.290 "hdgst": ${hdgst:-false}, 00:30:15.290 "ddgst": ${ddgst:-false} 00:30:15.290 }, 00:30:15.290 "method": "bdev_nvme_attach_controller" 00:30:15.290 } 00:30:15.290 EOF 00:30:15.290 )") 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.290 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.290 { 00:30:15.290 "params": { 00:30:15.290 "name": "Nvme$subsystem", 00:30:15.290 "trtype": "$TEST_TRANSPORT", 00:30:15.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.290 "adrfam": "ipv4", 00:30:15.290 "trsvcid": "$NVMF_PORT", 00:30:15.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.291 "hdgst": ${hdgst:-false}, 00:30:15.291 "ddgst": ${ddgst:-false} 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 } 00:30:15.291 EOF 00:30:15.291 )") 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.291 [2024-07-12 01:48:41.409799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:15.291 [2024-07-12 01:48:41.409850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130974 ] 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.291 { 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme$subsystem", 00:30:15.291 "trtype": "$TEST_TRANSPORT", 00:30:15.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "$NVMF_PORT", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.291 "hdgst": ${hdgst:-false}, 00:30:15.291 "ddgst": ${ddgst:-false} 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 } 00:30:15.291 EOF 00:30:15.291 )") 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.291 { 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme$subsystem", 00:30:15.291 "trtype": "$TEST_TRANSPORT", 00:30:15.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "$NVMF_PORT", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.291 "hdgst": ${hdgst:-false}, 00:30:15.291 "ddgst": ${ddgst:-false} 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 } 00:30:15.291 EOF 00:30:15.291 )") 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.291 { 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme$subsystem", 00:30:15.291 "trtype": "$TEST_TRANSPORT", 00:30:15.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "$NVMF_PORT", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.291 "hdgst": ${hdgst:-false}, 00:30:15.291 "ddgst": ${ddgst:-false} 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 } 00:30:15.291 EOF 00:30:15.291 )") 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.291 { 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme$subsystem", 00:30:15.291 "trtype": "$TEST_TRANSPORT", 00:30:15.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "$NVMF_PORT", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.291 "hdgst": ${hdgst:-false}, 
00:30:15.291 "ddgst": ${ddgst:-false} 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 } 00:30:15.291 EOF 00:30:15.291 )") 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:15.291 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:15.291 01:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme1", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme2", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme3", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme4", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme5", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme6", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme7", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 
00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme8", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme9", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 },{ 00:30:15.291 "params": { 00:30:15.291 "name": "Nvme10", 00:30:15.291 "trtype": "tcp", 00:30:15.291 "traddr": "10.0.0.2", 00:30:15.291 "adrfam": "ipv4", 00:30:15.291 "trsvcid": "4420", 00:30:15.291 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:15.291 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:15.291 "hdgst": false, 00:30:15.291 "ddgst": false 00:30:15.291 }, 00:30:15.291 "method": "bdev_nvme_attach_controller" 00:30:15.291 }' 00:30:15.291 [2024-07-12 01:48:41.476650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.291 [2024-07-12 01:48:41.507923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.676 Running I/O for 10 seconds... 00:30:16.676 01:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:16.676 01:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:30:16.676 01:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:16.676 01:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.676 01:48:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.937 01:48:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:16.937 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:17.198 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.458 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4130974 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4130974 ']' 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4130974 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4130974 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4130974' 00:30:17.720 killing process with pid 4130974 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4130974 00:30:17.720 01:48:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4130974 00:30:17.720 Received shutdown signal, test time was about 0.973722 seconds 00:30:17.720 00:30:17.720 Latency(us) 00:30:17.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.720 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme1n1 : 0.95 268.82 16.80 0.00 0.00 234778.88 38666.24 221948.59 00:30:17.720 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme2n1 : 0.97 207.64 12.98 0.00 0.00 284942.03 4341.76 241172.48 00:30:17.720 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme3n1 : 0.96 267.03 16.69 0.00 0.00 227337.39 20097.71 244667.73 00:30:17.720 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme4n1 : 0.93 205.44 12.84 0.00 0.00 288870.40 20097.71 248162.99 00:30:17.720 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme5n1 : 0.96 265.34 16.58 0.00 0.00 219183.79 32549.55 228939.09 00:30:17.720 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme6n1 : 0.95 269.11 16.82 0.00 0.00 211374.51 22063.79 218453.33 00:30:17.720 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme7n1 : 0.96 265.62 16.60 0.00 0.00 209423.36 15837.87 246415.36 00:30:17.720 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme8n1 : 0.96 267.63 16.73 0.00 0.00 203123.20 21626.88 242920.11 00:30:17.720 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme9n1 : 0.94 204.52 12.78 0.00 0.00 258696.82 16820.91 244667.73 00:30:17.720 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:17.720 Verification LBA range: start 0x0 length 0x400 00:30:17.720 Nvme10n1 : 0.95 202.92 12.68 0.00 0.00 254953.24 14854.83 265639.25 00:30:17.720 =================================================================================================================== 00:30:17.720 Total : 2424.08 151.50 0.00 0.00 235859.47 4341.76 265639.25 00:30:17.981 01:48:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:18.925 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:18.926 rmmod nvme_tcp 00:30:18.926 rmmod nvme_fabrics 00:30:18.926 rmmod nvme_keyring 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4130592 ']' 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4130592 ']' 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4130592' 00:30:18.926 killing process with pid 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4130592 00:30:18.926 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4130592 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:19.187 01:48:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.187 01:48:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:21.737 00:30:21.737 real 0m7.931s 00:30:21.737 user 0m23.889s 00:30:21.737 sys 0m1.308s 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.737 ************************************ 00:30:21.737 END TEST nvmf_shutdown_tc2 00:30:21.737 ************************************ 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:21.737 ************************************ 00:30:21.737 START TEST nvmf_shutdown_tc3 00:30:21.737 ************************************ 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:21.737 
01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.737 01:48:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.737 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.737 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.737 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.738 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.738 01:48:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.738 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.738 01:48:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:21.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:30:21.738 00:30:21.738 --- 10.0.0.2 ping statistics --- 00:30:21.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.738 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:21.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:30:21.738 00:30:21.738 --- 10.0.0.1 ping statistics --- 00:30:21.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.738 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4132432 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4132432 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4132432 ']' 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:21.738 01:48:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:21.738 01:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:21.738 [2024-07-12 01:48:48.075237] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:21.738 [2024-07-12 01:48:48.075307] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.001 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.001 [2024-07-12 01:48:48.170644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.001 [2024-07-12 01:48:48.203459] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.001 [2024-07-12 01:48:48.203498] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.001 [2024-07-12 01:48:48.203504] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.001 [2024-07-12 01:48:48.203509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.001 [2024-07-12 01:48:48.203513] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:22.001 [2024-07-12 01:48:48.203617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.001 [2024-07-12 01:48:48.203775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.001 [2024-07-12 01:48:48.203930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.001 [2024-07-12 01:48:48.203932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:22.573 [2024-07-12 01:48:48.878546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.573 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.833 01:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:22.833 Malloc1 00:30:22.833 [2024-07-12 01:48:48.977449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.833 Malloc2 00:30:22.833 Malloc3 00:30:22.833 Malloc4 00:30:22.833 Malloc5 00:30:22.833 Malloc6 00:30:22.833 Malloc7 00:30:23.092 Malloc8 00:30:23.092 Malloc9 00:30:23.092 Malloc10 00:30:23.092 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.092 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:23.092 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.092 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4132681 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4132681 /var/tmp/bdevperf.sock 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4132681 ']' 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:23.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 [2024-07-12 01:48:49.430111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:23.093 [2024-07-12 01:48:49.430165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132681 ] 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.093 "ddgst": ${ddgst:-false} 00:30:23.093 }, 00:30:23.093 "method": "bdev_nvme_attach_controller" 00:30:23.093 } 00:30:23.093 EOF 00:30:23.093 )") 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.093 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.093 { 00:30:23.093 "params": { 00:30:23.093 "name": "Nvme$subsystem", 00:30:23.093 "trtype": "$TEST_TRANSPORT", 00:30:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.093 "adrfam": "ipv4", 00:30:23.093 "trsvcid": "$NVMF_PORT", 00:30:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.093 "hdgst": ${hdgst:-false}, 00:30:23.094 "ddgst": ${ddgst:-false} 00:30:23.094 }, 00:30:23.094 "method": "bdev_nvme_attach_controller" 00:30:23.094 } 00:30:23.094 EOF 00:30:23.094 )") 00:30:23.094 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.355 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.355 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.355 { 00:30:23.355 "params": { 00:30:23.355 "name": "Nvme$subsystem", 00:30:23.355 "trtype": "$TEST_TRANSPORT", 00:30:23.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.355 "adrfam": "ipv4", 00:30:23.355 "trsvcid": "$NVMF_PORT", 00:30:23.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.355 "hdgst": ${hdgst:-false}, 00:30:23.355 "ddgst": ${ddgst:-false} 00:30:23.355 }, 00:30:23.355 "method": "bdev_nvme_attach_controller" 00:30:23.355 } 00:30:23.355 EOF 00:30:23.355 )") 00:30:23.355 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.355 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.355 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.355 { 00:30:23.355 "params": { 00:30:23.355 "name": "Nvme$subsystem", 00:30:23.355 "trtype": "$TEST_TRANSPORT", 00:30:23.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.355 "adrfam": "ipv4", 00:30:23.355 "trsvcid": "$NVMF_PORT", 00:30:23.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.356 "hdgst": ${hdgst:-false}, 
00:30:23.356 "ddgst": ${ddgst:-false} 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 } 00:30:23.356 EOF 00:30:23.356 )") 00:30:23.356 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:23.356 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.356 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:23.356 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:23.356 01:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme1", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme2", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme3", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme4", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme5", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme6", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme7", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 
00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme8", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme9", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 },{ 00:30:23.356 "params": { 00:30:23.356 "name": "Nvme10", 00:30:23.356 "trtype": "tcp", 00:30:23.356 "traddr": "10.0.0.2", 00:30:23.356 "adrfam": "ipv4", 00:30:23.356 "trsvcid": "4420", 00:30:23.356 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:23.356 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:23.356 "hdgst": false, 00:30:23.356 "ddgst": false 00:30:23.356 }, 00:30:23.356 "method": "bdev_nvme_attach_controller" 00:30:23.356 }' 00:30:23.356 [2024-07-12 01:48:49.497047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.356 [2024-07-12 01:48:49.528181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.741 Running I/O for 10 seconds... 00:30:24.741 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:24.741 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:30:24.741 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:24.741 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.741 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:25.029 01:48:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:25.029 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:25.289 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4132432 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 4132432 ']' 00:30:25.549 01:48:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 4132432 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:25.549 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4132432 00:30:25.825 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:25.825 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:25.825 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4132432' 00:30:25.825 killing process with pid 4132432 00:30:25.825 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 4132432 00:30:25.825 01:48:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 4132432 00:30:25.825 [2024-07-12 01:48:51.941454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the 
state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.825 [2024-07-12 01:48:51.941736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941769] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.941788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ae810 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 00:30:25.826 [2024-07-12 01:48:51.942855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1210 is same with the state(5) to be set 
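The waitforio polling traced further above (read_io_count climbing 3 -> 67 -> 131 against the -ge 100 threshold before killprocess is invoked) boils down to a bounded retry loop over bdev_get_iostat. A minimal sketch of that pattern, assuming rpc_cmd is the test harness's RPC wrapper seen in the trace and jq is available; names mirror the trace, but the body is a simplified reconstruction, not the literal target/shutdown.sh source:

waitforio() {
	local sock=$1 bdev=$2    # e.g. /var/tmp/bdevperf.sock and Nvme1n1, as in the trace
	local ret=1 i read_io_count
	for ((i = 10; i != 0; i--)); do
		# bdev_get_iostat returns per-bdev counters as JSON over the bdevperf RPC socket
		read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0    # enough reads observed; bdevperf I/O is flowing
			break
		fi
		sleep 0.25
	done
	return $ret
}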
[tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=... is same with the state(5) to be set" repeated verbatim through 01:48:51.948 for tqpair=0x8b1210, 0x8aecb0, 0x8af150, 0x8af610 and 0x8aff50]
00:30:25.829 [2024-07-12 01:48:51.948789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is 
same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.948965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff50 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.829 [2024-07-12 01:48:51.949869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949930] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.949995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.950026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 
00:30:25.830 [2024-07-12 01:48:51.953362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404950 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403b70 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9a610 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256a5c0 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:25.830 [2024-07-12 01:48:51.953796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256af50 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.830 [2024-07-12 01:48:51.953890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.830 [2024-07-12 01:48:51.953897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9f80 is same with the state(5) to be set 00:30:25.830 [2024-07-12 01:48:51.953919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.953927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.953935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.953942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.953949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.953957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.953966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.953973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.953980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cde60 is same with the state(5) to be set 00:30:25.831 [2024-07-12 01:48:51.954002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.954011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.954019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.954026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.954034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.954041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.954049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.831 [2024-07-12 01:48:51.954057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.954063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0740 is same with the state(5) to be set 00:30:25.831 [2024-07-12 01:48:51.955874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.955987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.955996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:25.831 [2024-07-12 01:48:51.956147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:25.831 [2024-07-12 01:48:51.956318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.831 [2024-07-12 01:48:51.956453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.831 [2024-07-12 01:48:51.956461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 
[2024-07-12 01:48:51.956487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 
01:48:51.956655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 
01:48:51.956816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.956949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.832 [2024-07-12 01:48:51.956957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.832 [2024-07-12 01:48:51.957010] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2399f40 was disconnected and freed. reset controller. 
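Every completion in the dump above carries the status pair (00/08); read as NVMe status-code-type/status-code, that is generic status 0x08, Command Aborted due to SQ Deletion, i.e. the outstanding WRITEs were dropped when the submission queue was deleted during the controller reset. As a rough, purely illustrative aid for skimming such dumps (the regex and helper name below are assumptions, not part of SPDK or this test suite), one could tally the aborted completions:

# Illustrative log-reading helper (assumed names, not SPDK code): tally NVMe
# completion statuses of the form "(00/08)" seen in the qpair dump above.
import re
from collections import Counter

STATUS_RE = re.compile(r"ABORTED - SQ DELETION \((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)")

def count_aborted(lines):
    counts = Counter()
    for line in lines:
        m = STATUS_RE.search(line)
        if m:
            # NVMe status: SCT 0x00 = Generic Command Status,
            # SC 0x08 = Command Aborted due to SQ Deletion
            counts[(int(m["sct"], 16), int(m["sc"], 16))] += 1
    return counts

# One entry copied from the log as a usage example:
sample = "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
print(count_aborted([sample]))  # Counter({(0, 8): 1})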
00:30:25.832 [2024-07-12 01:48:51.958733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:25.832 [2024-07-12 01:48:51.958737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256a5c0 (9): Bad file descriptor 00:30:25.832 [2024-07-12 01:48:51.958770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b03f0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.958894] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.958980] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.959120] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.959170] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.959207] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.959259] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.832 [2024-07-12 01:48:51.959510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.959525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.959530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.959535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.832 [2024-07-12 01:48:51.959540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959632] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 
00:30:25.833 [2024-07-12 01:48:51.959734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.959810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08b0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.833 [2024-07-12 01:48:51.960037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x256a5c0 with addr=10.0.0.2, port=4420 00:30:25.833 [2024-07-12 01:48:51.960046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256a5c0 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256a5c0 (9): Bad file descriptor 
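The burst of errors above is the initiator side of the test failing to reconnect while the target listener is being torn down: connect() returns errno 111, which on Linux is ECONNREFUSED, so posix_sock_create and nvme_tcp_qpair_connect_sock report a socket error for 10.0.0.2 port 4420 and the following flush fails with a bad file descriptor. A minimal sketch, assuming this console output has been saved locally as nvmf_shutdown_console.log (a name chosen here, not produced by the job), that tallies those connect() failures by errno:

import errno
import os
import re
from collections import Counter

LOG = "nvmf_shutdown_console.log"  # assumed local copy of this console log

# errno 111 reported by posix_sock_create above is ECONNREFUSED on Linux.
print("errno 111 =", errno.errorcode.get(111), "-", os.strerror(111))

connect_err = re.compile(r"connect\(\) failed, errno = (\d+)")
counts = Counter()
with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        # several log entries can be packed onto one line in this capture
        for code in connect_err.findall(line):
            counts[int(code)] += 1

for code, n in counts.most_common():
    print(f"errno {code} ({errno.errorcode.get(code, '?')}): {n} connect() failure(s)")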
00:30:25.833 [2024-07-12 01:48:51.960254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.833 [2024-07-12 01:48:51.960293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:25.834 [2024-07-12 01:48:51.960358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with 
the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:25.834 [2024-07-12 01:48:51.960372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960376] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:25.834 [2024-07-12 01:48:51.960377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960432] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.834 [2024-07-12 01:48:51.960437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 
[2024-07-12 01:48:51.960466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
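Taken together, the entries just above show one full failed reset cycle for nqn.2016-06.io.spdk:cnode5: the reconnect is refused, nvme_ctrlr_process_init finds the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async reports that reinitialization failed, nvme_ctrlr_fail leaves it in the failed state, and bdev_nvme finishes with "Resetting controller failed." Because the same recv-state message is repeated hundreds of times around those lines, a small helper is handy for collapsing the output; here is a sketch, again assuming a local copy named nvmf_shutdown_console.log and normalizing hex pointers so identical messages group together:

import re
from collections import Counter

LOG = "nvmf_shutdown_console.log"  # assumed local copy of this console log

# Capture each *ERROR* message up to the next harness timestamp or end of line.
error_msg = re.compile(r"\*ERROR\*: (.+?)(?=\s+\d{2}:\d{2}:\d{2}\.\d{3} \[|\s*$)")
hex_ptr = re.compile(r"0x[0-9a-fA-F]+")

counts = Counter()
with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        for msg in error_msg.findall(line):
            # replace pointer values so repeated messages collapse into one bucket
            counts[hex_ptr.sub("0x?", msg.strip())] += 1

for msg, n in counts.most_common(15):
    print(f"{n:6d}  {msg}")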
00:30:25.834 [2024-07-12 01:48:51.960515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.834 [2024-07-12 01:48:51.960558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.960562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.960567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0d50 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.960660] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.835 [2024-07-12 01:48:51.961770] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:25.835 [2024-07-12 01:48:51.963368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2404950 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fd0a0 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.963483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403b70 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9a610 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.835 [2024-07-12 01:48:51.963582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.963589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245aaf0 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.963606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256af50 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9f80 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cde60 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.963651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0740 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.969284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:25.835 [2024-07-12 01:48:51.969723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.835 [2024-07-12 01:48:51.969737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x256a5c0 with addr=10.0.0.2, port=4420 00:30:25.835 [2024-07-12 01:48:51.969745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256a5c0 is same with the state(5) to be set 00:30:25.835 [2024-07-12 01:48:51.969791] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256a5c0 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.969835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:25.835 [2024-07-12 01:48:51.969843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:25.835 [2024-07-12 01:48:51.969851] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:25.835 [2024-07-12 01:48:51.969899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.835 [2024-07-12 01:48:51.973419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fd0a0 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.973456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245aaf0 (9): Bad file descriptor 00:30:25.835 [2024-07-12 01:48:51.973586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.835 [2024-07-12 01:48:51.973981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.835 [2024-07-12 01:48:51.973988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.973997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.974672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.974680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd00 is same with the state(5) to be set 00:30:25.836 [2024-07-12 01:48:51.975962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.975979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.975992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.836 [2024-07-12 01:48:51.976001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.836 [2024-07-12 01:48:51.976012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.837 [2024-07-12 01:48:51.976749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.837 [2024-07-12 01:48:51.976757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:25.838 [2024-07-12 01:48:51.976906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.976982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.976992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.977009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.977026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.977043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.977060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 
01:48:51.977077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.977083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.977092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255aaa0 is same with the state(5) to be set 00:30:25.838 [2024-07-12 01:48:51.978358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.838 [2024-07-12 01:48:51.978645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.838 [2024-07-12 01:48:51.978654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.978986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.978995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.839 [2024-07-12 01:48:51.979364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.839 [2024-07-12 01:48:51.979374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.979467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.979476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566b10 is same with the state(5) to be set 00:30:25.840 [2024-07-12 01:48:51.980749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980826] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.980987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.980997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:25.840 [2024-07-12 01:48:51.981352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.840 [2024-07-12 01:48:51.981361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:25.841 [2024-07-12 01:48:51.981523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 
01:48:51.981692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.981846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.981854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238db70 is same with the state(5) to be set 00:30:25.841 [2024-07-12 01:48:51.983120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983368] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.841 [2024-07-12 01:48:51.983409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.841 [2024-07-12 01:48:51.983418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.983986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.983995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.984004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.984013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.984020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.984030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.842 [2024-07-12 01:48:51.984037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.842 [2024-07-12 01:48:51.984046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:25.843 [2024-07-12 01:48:51.984054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 
01:48:51.984224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.984278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.984287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239b440 is same with the state(5) to be set 00:30:25.843 [2024-07-12 01:48:51.985550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.985985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.985993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.986002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.986010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.986020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.843 [2024-07-12 01:48:51.986027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.843 [2024-07-12 01:48:51.986037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.986644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.986652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239c960 is same with the state(5) to be set 00:30:25.844 [2024-07-12 01:48:51.987933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.987962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.987971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.987982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.987991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.988002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.988011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.988020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.988028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.988037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.844 [2024-07-12 01:48:51.988044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.844 [2024-07-12 01:48:51.988057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988151] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.845 [2024-07-12 01:48:51.988787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.845 [2024-07-12 01:48:51.988794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:25.846 [2024-07-12 01:48:51.988851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.988986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.988993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.989003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.989010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 
01:48:51.989020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:25.846 [2024-07-12 01:48:51.989027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.846 [2024-07-12 01:48:51.989036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:25.846 [2024-07-12 01:48:51.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.846 [2024-07-12 01:48:51.989052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2425250 is same with the state(5) to be set
00:30:25.846 [2024-07-12 01:48:51.990562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:25.846 [2024-07-12 01:48:51.990586] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:25.846 [2024-07-12 01:48:51.990597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:25.846 [2024-07-12 01:48:51.990652] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:25.846 [2024-07-12 01:48:51.990670] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:25.846 [2024-07-12 01:48:51.990684] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:25.846 [2024-07-12 01:48:51.990701] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:25.846 [2024-07-12 01:48:51.990789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:25.846 [2024-07-12 01:48:51.990801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:25.846 [2024-07-12 01:48:51.990810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:25.846 [2024-07-12 01:48:51.990824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:25.846 [2024-07-12 01:48:51.991283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.991299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a0740 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.991308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0740 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.991655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.991665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x256af50 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.991672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256af50 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.991894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.991905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cde60 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.991912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cde60 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.993710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:25.846 [2024-07-12 01:48:51.994102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.994114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9f80 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.994122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9f80 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.994478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.994490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9a610 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.994498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9a610 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.994846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 01:48:51.994856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2404950 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.994864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404950 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.995223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.846 [2024-07-12 
01:48:51.995240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2403b70 with addr=10.0.0.2, port=4420 00:30:25.846 [2024-07-12 01:48:51.995248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403b70 is same with the state(5) to be set 00:30:25.846 [2024-07-12 01:48:51.995259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0740 (9): Bad file descriptor 00:30:25.846 [2024-07-12 01:48:51.995269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256af50 (9): Bad file descriptor 00:30:25.846 [2024-07-12 01:48:51.995278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cde60 (9): Bad file descriptor 00:30:25.846 [2024-07-12 01:48:51.995354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.995377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.995397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.995415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.995431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.846 [2024-07-12 01:48:51.995447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.846 [2024-07-12 01:48:51.995455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.995979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:25.847 [2024-07-12 01:48:51.995995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 01:48:51.996149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.847 [2024-07-12 01:48:51.996156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.847 [2024-07-12 
01:48:51.996166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.996430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.996438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e6c640 is same with the state(5) to be set 00:30:25.848 [2024-07-12 01:48:51.997698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.997991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.997998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.848 [2024-07-12 01:48:51.998158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.848 [2024-07-12 01:48:51.998168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.849 [2024-07-12 01:48:51.998824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.849 [2024-07-12 01:48:51.998831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.849 [2024-07-12 01:48:51.998839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2423d20 is same with the state(5) to be set
00:30:25.849 [2024-07-12 01:48:52.000311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:25.849 task offset: 24576 on job bdev=Nvme5n1 fails
00:30:25.849
00:30:25.849 Latency(us)
00:30:25.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:25.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.849 Job: Nvme1n1 ended in about 0.94 seconds with error
00:30:25.849 Verification LBA range: start 0x0 length 0x400
00:30:25.849 Nvme1n1 : 0.94 204.84 12.80 68.28 0.00 231671.25 21189.97 222822.40
00:30:25.849 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.849 Job: Nvme2n1 ended in about 0.94 seconds with error
00:30:25.849 Verification LBA range: start 0x0 length 0x400
00:30:25.849 Nvme2n1 : 0.94 136.21 8.51 68.10 0.00 303503.64 18677.76 267386.88
00:30:25.849 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.849 Job: Nvme3n1 ended in about 0.94 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme3n1 : 0.94 203.80 12.74 67.93 0.00 223382.29 12724.91 241172.48
00:30:25.850 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme4n1 ended in about 0.94 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme4n1 : 0.94 203.29 12.71 67.76 0.00 219193.28 9830.40 253405.87
00:30:25.850 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme5n1 ended in about 0.92 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme5n1 : 0.92 208.69 13.04 69.56 0.00 208348.75 2143.57 244667.73
00:30:25.850 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme6n1 ended in about 0.95 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme6n1 : 0.95 135.18 8.45 67.59 0.00 280626.63 18568.53 269134.51
00:30:25.850 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme7n1 ended in about 0.95 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme7n1 : 0.95 134.84 8.43 67.42 0.00 275085.37 20753.07 249910.61
00:30:25.850 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme8n1 ended in about 0.96 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme8n1 : 0.96 200.20 12.51 66.73 0.00 203903.79 21408.43 244667.73
00:30:25.850 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme9n1 ended in about 0.96 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme9n1 : 0.96 133.14 8.32 66.57 0.00 266528.71 22391.47 251658.24
00:30:25.850 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:25.850 Job: Nvme10n1 ended in about 0.95 seconds with error
00:30:25.850 Verification LBA range: start 0x0 length 0x400
00:30:25.850 Nvme10n1 : 0.95 201.75 12.61 67.25 0.00 192612.48 21299.20 242920.11
=================================================================================================================== 00:30:25.850 Total : 1761.92 110.12 677.20 0.00 235935.57 2143.57 269134.51 00:30:25.850 [2024-07-12 01:48:52.024124] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:25.850 [2024-07-12 01:48:52.024157] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:25.850 [2024-07-12 01:48:52.024569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.024586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x256a5c0 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.024596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256a5c0 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.024609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9f80 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.024620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9a610 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.024630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2404950 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.024640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403b70 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.024648] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.024656] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.024664] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.850 [2024-07-12 01:48:52.024679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.024685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.024692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:25.850 [2024-07-12 01:48:52.024703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.024709] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.024717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:25.850 [2024-07-12 01:48:52.024842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.024853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.024860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
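A note on reading the bdevperf summary above: each job runs with a 64 KiB I/O size (IO size: 65536), so the MiB/s column is simply IOPS divided by 16. For the Total row, 1761.92 IOPS / 16 is roughly 110.12 MiB/s, which matches the printed value. A quick sanity check on the build host (python3 is assumed to be available, as SPDK already requires it):

    python3 -c 'print(1761.92 * 65536 / 2**20)'    # prints ~110.12, the Total MiB/s value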
00:30:25.850 [2024-07-12 01:48:52.025258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.025271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245aaf0 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.025279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245aaf0 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.025637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.025647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fd0a0 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.025655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fd0a0 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.025665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256a5c0 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.025674] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.025680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.025692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:25.850 [2024-07-12 01:48:52.025703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.025709] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.025716] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:25.850 [2024-07-12 01:48:52.025726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.025733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.025740] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:25.850 [2024-07-12 01:48:52.025751] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.025757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.025763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:25.850 [2024-07-12 01:48:52.025801] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:25.850 [2024-07-12 01:48:52.025813] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:25.850 [2024-07-12 01:48:52.025823] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:25.850 [2024-07-12 01:48:52.025834] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:25.850 [2024-07-12 01:48:52.025844] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:25.850 [2024-07-12 01:48:52.026396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.026408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.026415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.026421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.026439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245aaf0 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.026448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fd0a0 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.026457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.026463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.026470] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:25.850 [2024-07-12 01:48:52.026821] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:25.850 [2024-07-12 01:48:52.026836] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:25.850 [2024-07-12 01:48:52.026845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.850 [2024-07-12 01:48:52.026854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.026879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.026886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.026896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:25.850 [2024-07-12 01:48:52.026906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:25.850 [2024-07-12 01:48:52.026912] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:25.850 [2024-07-12 01:48:52.026919] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:25.850 [2024-07-12 01:48:52.027390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.850 [2024-07-12 01:48:52.027401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.850 [2024-07-12 01:48:52.027622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.027636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cde60 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.027644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cde60 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.027982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.027993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x256af50 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.028000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256af50 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.028329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.850 [2024-07-12 01:48:52.028341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a0740 with addr=10.0.0.2, port=4420 00:30:25.850 [2024-07-12 01:48:52.028348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0740 is same with the state(5) to be set 00:30:25.850 [2024-07-12 01:48:52.028379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cde60 (9): Bad file descriptor 00:30:25.850 [2024-07-12 01:48:52.028390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256af50 (9): Bad file descriptor 00:30:25.851 [2024-07-12 01:48:52.028399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0740 (9): Bad file descriptor 00:30:25.851 [2024-07-12 01:48:52.028431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:25.851 [2024-07-12 01:48:52.028440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:25.851 [2024-07-12 01:48:52.028447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:25.851 [2024-07-12 01:48:52.028457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:25.851 [2024-07-12 01:48:52.028463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:25.851 [2024-07-12 01:48:52.028470] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:25.851 [2024-07-12 01:48:52.028479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.851 [2024-07-12 01:48:52.028485] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.851 [2024-07-12 01:48:52.028492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.851 [2024-07-12 01:48:52.028521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.851 [2024-07-12 01:48:52.028529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
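For context on the repeated posix_sock_create errors above: errno 111 on Linux is ECONNREFUSED. The shutdown test has already taken the NVMe-oF TCP target down while I/O was in flight, so every reconnect attempt to 10.0.0.2 port 4420 is refused and the affected controllers are left in the failed state, which is the behaviour this test case is meant to provoke. The errno mapping can be confirmed directly on the host:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused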
00:30:25.851 [2024-07-12 01:48:52.028535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.111 01:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:26.111 01:48:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4132681 00:30:27.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4132681) - No such process 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.110 rmmod nvme_tcp 00:30:27.110 rmmod nvme_fabrics 00:30:27.110 rmmod nvme_keyring 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.110 01:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.017 01:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:30:29.017 00:30:29.017 real 0m7.746s 00:30:29.017 user 0m18.805s 00:30:29.017 sys 0m1.199s 00:30:29.017 01:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:29.017 01:48:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:29.017 ************************************ 00:30:29.017 END TEST nvmf_shutdown_tc3 00:30:29.017 ************************************ 00:30:29.278 01:48:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:29.278 00:30:29.278 real 0m33.467s 00:30:29.278 user 1m15.843s 00:30:29.278 sys 0m10.140s 00:30:29.278 01:48:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:29.279 01:48:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.279 ************************************ 00:30:29.279 END TEST nvmf_shutdown 00:30:29.279 ************************************ 00:30:29.279 01:48:55 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.279 01:48:55 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.279 01:48:55 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:29.279 01:48:55 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:29.279 01:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.279 ************************************ 00:30:29.279 START TEST nvmf_multicontroller 00:30:29.279 ************************************ 00:30:29.279 01:48:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:29.279 * Looking for test storage... 
00:30:29.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.279 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.279 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:29.540 01:48:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:29.540 01:48:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.777 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.778 01:49:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:37.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:37.778 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:37.778 Found net devices under 0000:31:00.0: cvl_0_0 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:37.778 Found net devices under 0000:31:00.1: cvl_0_1 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.778 01:49:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:37.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:30:37.778 00:30:37.778 --- 10.0.0.2 ping statistics --- 00:30:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.778 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:30:37.778 00:30:37.778 --- 10.0.0.1 ping statistics --- 00:30:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.778 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.778 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4138157 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4138157 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4138157 ']' 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:37.779 01:49:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.779 [2024-07-12 01:49:04.008964] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:37.779 [2024-07-12 01:49:04.009034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.779 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.779 [2024-07-12 01:49:04.106847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:38.039 [2024-07-12 01:49:04.153969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.039 [2024-07-12 01:49:04.154026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.039 [2024-07-12 01:49:04.154035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.039 [2024-07-12 01:49:04.154042] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.039 [2024-07-12 01:49:04.154048] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.039 [2024-07-12 01:49:04.154173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.039 [2024-07-12 01:49:04.154309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:38.039 [2024-07-12 01:49:04.154501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 [2024-07-12 01:49:04.837696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.608 01:49:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 Malloc0 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.608 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.609 [2024-07-12 01:49:04.911670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.609 [2024-07-12 01:49:04.923628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.609 Malloc1 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:38.609 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4138263 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4138263 /var/tmp/bdevperf.sock 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4138263 ']' 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
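The bdev_nvme_attach_controller calls that follow exercise bdevperf's duplicate-controller handling: the first attach of NVMe0 to nqn.2016-06.io.spdk:cnode1 succeeds and exposes NVMe0n1, while every re-use of the name NVMe0 with a different hostnqn, a different subsystem, or with -x disable / -x failover is rejected with JSON-RPC error -114; only the later attach of a second path on port 4421 to the same subsystem is accepted. A minimal standalone sketch of the same sequence, assuming SPDK's scripts/rpc.py client is used in place of the rpc_cmd wrapper and that bdevperf is already listening on /var/tmp/bdevperf.sock:

    rpc=./scripts/rpc.py            # assumed path inside the SPDK checkout
    sock=/var/tmp/bdevperf.sock     # bdevperf RPC socket started by multicontroller.sh

    # First attach: creates controller NVMe0 and bdev NVMe0n1.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Re-using the name NVMe0 for a different subsystem is expected to fail with -114.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
        || echo 'rejected as expected'

    # A second path (port 4421) to the same subsystem under the same controller name is allowed.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1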
00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:38.869 01:49:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.810 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.810 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:39.810 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:39.810 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.811 NVMe0n1 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.811 1 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.811 request: 00:30:39.811 { 00:30:39.811 "name": "NVMe0", 00:30:39.811 "trtype": "tcp", 00:30:39.811 "traddr": "10.0.0.2", 00:30:39.811 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:39.811 "hostaddr": "10.0.0.2", 00:30:39.811 "hostsvcid": "60000", 00:30:39.811 "adrfam": "ipv4", 00:30:39.811 "trsvcid": "4420", 00:30:39.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.811 "method": 
"bdev_nvme_attach_controller", 00:30:39.811 "req_id": 1 00:30:39.811 } 00:30:39.811 Got JSON-RPC error response 00:30:39.811 response: 00:30:39.811 { 00:30:39.811 "code": -114, 00:30:39.811 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:39.811 } 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.811 request: 00:30:39.811 { 00:30:39.811 "name": "NVMe0", 00:30:39.811 "trtype": "tcp", 00:30:39.811 "traddr": "10.0.0.2", 00:30:39.811 "hostaddr": "10.0.0.2", 00:30:39.811 "hostsvcid": "60000", 00:30:39.811 "adrfam": "ipv4", 00:30:39.811 "trsvcid": "4420", 00:30:39.811 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:39.811 "method": "bdev_nvme_attach_controller", 00:30:39.811 "req_id": 1 00:30:39.811 } 00:30:39.811 Got JSON-RPC error response 00:30:39.811 response: 00:30:39.811 { 00:30:39.811 "code": -114, 00:30:39.811 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:39.811 } 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.811 01:49:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.811 request: 00:30:39.811 { 00:30:39.811 "name": "NVMe0", 00:30:39.811 "trtype": "tcp", 00:30:39.811 "traddr": "10.0.0.2", 00:30:39.811 "hostaddr": "10.0.0.2", 00:30:39.811 "hostsvcid": "60000", 00:30:39.811 "adrfam": "ipv4", 00:30:39.811 "trsvcid": "4420", 00:30:39.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.811 "multipath": "disable", 00:30:39.811 "method": "bdev_nvme_attach_controller", 00:30:39.811 "req_id": 1 00:30:39.811 } 00:30:39.811 Got JSON-RPC error response 00:30:39.811 response: 00:30:39.811 { 00:30:39.811 "code": -114, 00:30:39.811 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:39.811 } 00:30:39.811 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.812 request: 00:30:39.812 { 00:30:39.812 "name": "NVMe0", 00:30:39.812 "trtype": "tcp", 00:30:39.812 "traddr": "10.0.0.2", 00:30:39.812 "hostaddr": "10.0.0.2", 00:30:39.812 "hostsvcid": "60000", 00:30:39.812 "adrfam": "ipv4", 00:30:39.812 "trsvcid": "4420", 00:30:39.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.812 "multipath": "failover", 00:30:39.812 "method": "bdev_nvme_attach_controller", 00:30:39.812 "req_id": 1 00:30:39.812 } 00:30:39.812 Got JSON-RPC error response 00:30:39.812 response: 00:30:39.812 { 00:30:39.812 "code": -114, 00:30:39.812 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:39.812 } 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.812 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.071 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.071 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:40.071 01:49:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:41.451 0 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4138263 ']' 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4138263' 00:30:41.451 killing process with pid 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4138263 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:41.451 01:49:07 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:30:41.451 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:41.451 [2024-07-12 01:49:05.050402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:41.451 [2024-07-12 01:49:05.050483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138263 ] 00:30:41.451 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.451 [2024-07-12 01:49:05.118667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.451 [2024-07-12 01:49:05.150249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.451 [2024-07-12 01:49:06.340378] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 87e79b9b-4fed-4a1b-b794-2a6bc322033d already exists 00:30:41.451 [2024-07-12 01:49:06.340410] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:87e79b9b-4fed-4a1b-b794-2a6bc322033d alias for bdev NVMe1n1 00:30:41.451 [2024-07-12 01:49:06.340420] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:41.451 Running I/O for 1 seconds... 
00:30:41.451 00:30:41.451 Latency(us) 00:30:41.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.451 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:41.451 NVMe0n1 : 1.00 28730.63 112.23 0.00 0.00 4444.88 2116.27 17585.49 00:30:41.451 =================================================================================================================== 00:30:41.451 Total : 28730.63 112.23 0.00 0.00 4444.88 2116.27 17585.49 00:30:41.451 Received shutdown signal, test time was about 1.000000 seconds 00:30:41.451 00:30:41.451 Latency(us) 00:30:41.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.451 =================================================================================================================== 00:30:41.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:41.451 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.451 rmmod nvme_tcp 00:30:41.451 rmmod nvme_fabrics 00:30:41.451 rmmod nvme_keyring 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:41.451 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4138157 ']' 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4138157 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4138157 ']' 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4138157 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:41.452 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4138157 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4138157' 00:30:41.711 killing process with pid 4138157 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4138157 00:30:41.711 01:49:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4138157 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.711 01:49:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.254 01:49:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:44.254 00:30:44.254 real 0m14.500s 00:30:44.254 user 0m16.654s 00:30:44.254 sys 0m6.890s 00:30:44.254 01:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:44.254 01:49:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.254 ************************************ 00:30:44.254 END TEST nvmf_multicontroller 00:30:44.254 ************************************ 00:30:44.254 01:49:10 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:44.254 01:49:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:44.254 01:49:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:44.254 01:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:44.254 ************************************ 00:30:44.254 START TEST nvmf_aer 00:30:44.254 ************************************ 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:44.254 * Looking for test storage... 
00:30:44.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.254 01:49:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:52.397 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:52.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:30:52.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:52.398 Found net devices under 0000:31:00.0: cvl_0_0 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:52.398 Found net devices under 0000:31:00.1: cvl_0_1 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.398 
01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:52.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:30:52.398 00:30:52.398 --- 10.0.0.2 ping statistics --- 00:30:52.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.398 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:52.398 00:30:52.398 --- 10.0.0.1 ping statistics --- 00:30:52.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.398 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4143542 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4143542 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 4143542 ']' 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:52.398 01:49:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.398 [2024-07-12 01:49:18.518837] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:52.398 [2024-07-12 01:49:18.518901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.398 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.398 [2024-07-12 01:49:18.598360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:52.398 [2024-07-12 01:49:18.638647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.398 [2024-07-12 01:49:18.638689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:52.398 [2024-07-12 01:49:18.638697] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.398 [2024-07-12 01:49:18.638704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.398 [2024-07-12 01:49:18.638710] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.398 [2024-07-12 01:49:18.638852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.398 [2024-07-12 01:49:18.638974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.398 [2024-07-12 01:49:18.639133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.398 [2024-07-12 01:49:18.639134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.968 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:52.968 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:30:52.968 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.968 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.968 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 [2024-07-12 01:49:19.343871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 Malloc0 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 [2024-07-12 01:49:19.403201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.229 [ 00:30:53.229 { 00:30:53.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:53.229 "subtype": "Discovery", 00:30:53.229 "listen_addresses": [], 00:30:53.229 "allow_any_host": true, 00:30:53.229 "hosts": [] 00:30:53.229 }, 00:30:53.229 { 00:30:53.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.229 "subtype": "NVMe", 00:30:53.229 "listen_addresses": [ 00:30:53.229 { 00:30:53.229 "trtype": "TCP", 00:30:53.229 "adrfam": "IPv4", 00:30:53.229 "traddr": "10.0.0.2", 00:30:53.229 "trsvcid": "4420" 00:30:53.229 } 00:30:53.229 ], 00:30:53.229 "allow_any_host": true, 00:30:53.229 "hosts": [], 00:30:53.229 "serial_number": "SPDK00000000000001", 00:30:53.229 "model_number": "SPDK bdev Controller", 00:30:53.229 "max_namespaces": 2, 00:30:53.229 "min_cntlid": 1, 00:30:53.229 "max_cntlid": 65519, 00:30:53.229 "namespaces": [ 00:30:53.229 { 00:30:53.229 "nsid": 1, 00:30:53.229 "bdev_name": "Malloc0", 00:30:53.229 "name": "Malloc0", 00:30:53.229 "nguid": "DC45831434C64A7692CBD2BD3C20D4AF", 00:30:53.229 "uuid": "dc458314-34c6-4a76-92cb-d2bd3c20d4af" 00:30:53.229 } 00:30:53.229 ] 00:30:53.229 } 00:30:53.229 ] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4143649 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:53.229 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:30:53.229 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.491 Malloc1 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.491 Asynchronous Event Request test 00:30:53.491 Attaching to 10.0.0.2 00:30:53.491 Attached to 10.0.0.2 00:30:53.491 Registering asynchronous event callbacks... 00:30:53.491 Starting namespace attribute notice tests for all controllers... 00:30:53.491 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:53.491 aer_cb - Changed Namespace 00:30:53.491 Cleaning up... 
00:30:53.491 [ 00:30:53.491 { 00:30:53.491 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:53.491 "subtype": "Discovery", 00:30:53.491 "listen_addresses": [], 00:30:53.491 "allow_any_host": true, 00:30:53.491 "hosts": [] 00:30:53.491 }, 00:30:53.491 { 00:30:53.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.491 "subtype": "NVMe", 00:30:53.491 "listen_addresses": [ 00:30:53.491 { 00:30:53.491 "trtype": "TCP", 00:30:53.491 "adrfam": "IPv4", 00:30:53.491 "traddr": "10.0.0.2", 00:30:53.491 "trsvcid": "4420" 00:30:53.491 } 00:30:53.491 ], 00:30:53.491 "allow_any_host": true, 00:30:53.491 "hosts": [], 00:30:53.491 "serial_number": "SPDK00000000000001", 00:30:53.491 "model_number": "SPDK bdev Controller", 00:30:53.491 "max_namespaces": 2, 00:30:53.491 "min_cntlid": 1, 00:30:53.491 "max_cntlid": 65519, 00:30:53.491 "namespaces": [ 00:30:53.491 { 00:30:53.491 "nsid": 1, 00:30:53.491 "bdev_name": "Malloc0", 00:30:53.491 "name": "Malloc0", 00:30:53.491 "nguid": "DC45831434C64A7692CBD2BD3C20D4AF", 00:30:53.491 "uuid": "dc458314-34c6-4a76-92cb-d2bd3c20d4af" 00:30:53.491 }, 00:30:53.491 { 00:30:53.491 "nsid": 2, 00:30:53.491 "bdev_name": "Malloc1", 00:30:53.491 "name": "Malloc1", 00:30:53.491 "nguid": "147D58CECC0C4665BAE64828E6696130", 00:30:53.491 "uuid": "147d58ce-cc0c-4665-bae6-4828e6696130" 00:30:53.491 } 00:30:53.491 ] 00:30:53.491 } 00:30:53.491 ] 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4143649 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.491 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:53.752 rmmod nvme_tcp 00:30:53.752 rmmod nvme_fabrics 00:30:53.752 rmmod nvme_keyring 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4143542 ']' 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4143542 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 4143542 ']' 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 4143542 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143542 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143542' 00:30:53.752 killing process with pid 4143542 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 4143542 00:30:53.752 01:49:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 4143542 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.013 01:49:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.923 01:49:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.923 00:30:55.923 real 0m12.075s 00:30:55.923 user 0m8.318s 00:30:55.923 sys 0m6.448s 00:30:55.923 01:49:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:55.923 01:49:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:55.923 ************************************ 00:30:55.923 END TEST nvmf_aer 00:30:55.923 ************************************ 00:30:55.923 01:49:22 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:55.923 01:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:55.923 01:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:55.923 01:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.923 ************************************ 00:30:55.923 START TEST nvmf_async_init 00:30:55.923 ************************************ 00:30:55.923 01:49:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:56.184 * Looking for test storage... 
00:30:56.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.184 01:49:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e949a62940ab420db2fe744d0f3d4f1d 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:56.185 01:49:22 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:56.185 01:49:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:04.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:04.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:04.323 Found net devices under 0000:31:00.0: cvl_0_0 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
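The run above resolves which NICs the TCP tests may use: gather_supported_nvmf_pci_devs builds whitelists of Intel E810/X722 and Mellanox device IDs, keeps the E810 ports present on this rig (vendor 0x8086, device 0x159b, driver ice), and maps each PCI function to its kernel net device through sysfs, which is what prints the "Found net devices under ..." lines seen here. A minimal standalone sketch of that lookup, assuming a direct sysfs scan in place of the harness's pci_bus_cache helper and checking only the 0x159b ID observed in this log:

  #!/usr/bin/env bash
  # Sketch: locate net devices backed by Intel E810 (ice) ports, the same
  # class of lookup the trace above performs before picking cvl_0_0/cvl_0_1.
  intel=0x8086
  e810=0x159b            # assumption: only the device ID seen in this run is checked
  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor" 2>/dev/null)" = "$intel" ] || continue
      [ "$(cat "$pci/device" 2>/dev/null)" = "$e810" ]  || continue
      # each matching PCI function exposes its netdev name under .../net/
      for net_path in "$pci"/net/*; do
          [ -e "$net_path" ] || continue
          echo "Found net devices under ${pci##*/}: ${net_path##*/}"
      done
  done

On this system the loop would report the two ice ports (0000:31:00.0 -> cvl_0_0 and 0000:31:00.1 -> cvl_0_1) that the rest of the suite wires together.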
00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:04.323 Found net devices under 0000:31:00.1: cvl_0_1 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.323 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:04.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:31:04.324 00:31:04.324 --- 10.0.0.2 ping statistics --- 00:31:04.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.324 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:31:04.324 00:31:04.324 --- 10.0.0.1 ping statistics --- 00:31:04.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.324 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4148348 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4148348 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 4148348 ']' 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.324 01:49:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:04.324 [2024-07-12 01:49:30.536757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
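nvmf_tcp_init, traced above, turns those two ports into a back-to-back NVMe/TCP topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-verified before any RPCs run. A minimal sketch reproducing the same topology, with interface names and addresses copied from this run and error handling omitted:

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0          # target-side port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # let NVMe/TCP traffic reach the default test port
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions, as the harness does
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Keeping the target in its own namespace is what lets a single machine exercise real NIC-to-NIC traffic: the kernel cannot short-circuit 10.0.0.1 -> 10.0.0.2 over loopback, so packets actually cross the two ice ports.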
00:31:04.324 [2024-07-12 01:49:30.536813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.324 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.324 [2024-07-12 01:49:30.610847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.324 [2024-07-12 01:49:30.644194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.324 [2024-07-12 01:49:30.644253] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.324 [2024-07-12 01:49:30.644262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.324 [2024-07-12 01:49:30.644269] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.324 [2024-07-12 01:49:30.644274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.324 [2024-07-12 01:49:30.644293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 [2024-07-12 01:49:31.328801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 null0 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e949a62940ab420db2fe744d0f3d4f1d 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 [2024-07-12 01:49:31.369001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 nvme0n1 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 [ 00:31:05.264 { 00:31:05.264 "name": "nvme0n1", 00:31:05.264 "aliases": [ 00:31:05.264 "e949a629-40ab-420d-b2fe-744d0f3d4f1d" 00:31:05.264 ], 00:31:05.264 "product_name": "NVMe disk", 00:31:05.264 "block_size": 512, 00:31:05.264 "num_blocks": 2097152, 00:31:05.264 "uuid": "e949a629-40ab-420d-b2fe-744d0f3d4f1d", 00:31:05.264 "assigned_rate_limits": { 00:31:05.264 "rw_ios_per_sec": 0, 00:31:05.264 "rw_mbytes_per_sec": 0, 00:31:05.264 "r_mbytes_per_sec": 0, 00:31:05.264 "w_mbytes_per_sec": 0 00:31:05.264 }, 00:31:05.264 "claimed": false, 00:31:05.264 "zoned": false, 00:31:05.264 "supported_io_types": { 00:31:05.264 "read": true, 00:31:05.264 "write": true, 00:31:05.264 "unmap": false, 00:31:05.264 "write_zeroes": true, 00:31:05.264 "flush": true, 00:31:05.264 "reset": true, 00:31:05.264 "compare": true, 00:31:05.264 "compare_and_write": true, 00:31:05.264 "abort": true, 00:31:05.264 "nvme_admin": true, 00:31:05.264 "nvme_io": true 00:31:05.264 }, 00:31:05.264 "memory_domains": [ 00:31:05.264 { 00:31:05.264 "dma_device_id": "system", 00:31:05.264 "dma_device_type": 1 00:31:05.264 } 00:31:05.264 ], 00:31:05.264 "driver_specific": { 00:31:05.264 "nvme": [ 00:31:05.264 { 00:31:05.264 "trid": { 00:31:05.264 "trtype": "TCP", 00:31:05.264 "adrfam": "IPv4", 00:31:05.264 "traddr": "10.0.0.2", 00:31:05.264 "trsvcid": "4420", 00:31:05.264 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:05.264 }, 00:31:05.264 "ctrlr_data": { 00:31:05.264 "cntlid": 1, 00:31:05.264 "vendor_id": "0x8086", 00:31:05.264 "model_number": "SPDK bdev Controller", 00:31:05.264 "serial_number": "00000000000000000000", 00:31:05.264 "firmware_revision": 
"24.05.1", 00:31:05.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.264 "oacs": { 00:31:05.264 "security": 0, 00:31:05.264 "format": 0, 00:31:05.264 "firmware": 0, 00:31:05.264 "ns_manage": 0 00:31:05.264 }, 00:31:05.264 "multi_ctrlr": true, 00:31:05.264 "ana_reporting": false 00:31:05.264 }, 00:31:05.264 "vs": { 00:31:05.264 "nvme_version": "1.3" 00:31:05.264 }, 00:31:05.264 "ns_data": { 00:31:05.264 "id": 1, 00:31:05.264 "can_share": true 00:31:05.264 } 00:31:05.264 } 00:31:05.264 ], 00:31:05.264 "mp_policy": "active_passive" 00:31:05.264 } 00:31:05.264 } 00:31:05.264 ] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.264 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.264 [2024-07-12 01:49:31.617519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.264 [2024-07-12 01:49:31.617579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98e600 (9): Bad file descriptor 00:31:05.524 [2024-07-12 01:49:31.751320] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 [ 00:31:05.524 { 00:31:05.524 "name": "nvme0n1", 00:31:05.524 "aliases": [ 00:31:05.524 "e949a629-40ab-420d-b2fe-744d0f3d4f1d" 00:31:05.524 ], 00:31:05.524 "product_name": "NVMe disk", 00:31:05.524 "block_size": 512, 00:31:05.524 "num_blocks": 2097152, 00:31:05.524 "uuid": "e949a629-40ab-420d-b2fe-744d0f3d4f1d", 00:31:05.524 "assigned_rate_limits": { 00:31:05.524 "rw_ios_per_sec": 0, 00:31:05.524 "rw_mbytes_per_sec": 0, 00:31:05.524 "r_mbytes_per_sec": 0, 00:31:05.524 "w_mbytes_per_sec": 0 00:31:05.524 }, 00:31:05.524 "claimed": false, 00:31:05.524 "zoned": false, 00:31:05.524 "supported_io_types": { 00:31:05.524 "read": true, 00:31:05.524 "write": true, 00:31:05.524 "unmap": false, 00:31:05.524 "write_zeroes": true, 00:31:05.524 "flush": true, 00:31:05.524 "reset": true, 00:31:05.524 "compare": true, 00:31:05.524 "compare_and_write": true, 00:31:05.524 "abort": true, 00:31:05.524 "nvme_admin": true, 00:31:05.524 "nvme_io": true 00:31:05.524 }, 00:31:05.524 "memory_domains": [ 00:31:05.524 { 00:31:05.524 "dma_device_id": "system", 00:31:05.524 "dma_device_type": 1 00:31:05.524 } 00:31:05.524 ], 00:31:05.524 "driver_specific": { 00:31:05.524 "nvme": [ 00:31:05.524 { 00:31:05.524 "trid": { 00:31:05.524 "trtype": "TCP", 00:31:05.524 "adrfam": "IPv4", 00:31:05.524 "traddr": "10.0.0.2", 00:31:05.524 "trsvcid": "4420", 00:31:05.524 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:05.524 }, 00:31:05.524 "ctrlr_data": { 00:31:05.524 "cntlid": 2, 00:31:05.524 "vendor_id": "0x8086", 00:31:05.524 "model_number": "SPDK bdev Controller", 00:31:05.524 "serial_number": "00000000000000000000", 00:31:05.524 "firmware_revision": "24.05.1", 00:31:05.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.524 
"oacs": { 00:31:05.524 "security": 0, 00:31:05.524 "format": 0, 00:31:05.524 "firmware": 0, 00:31:05.524 "ns_manage": 0 00:31:05.524 }, 00:31:05.524 "multi_ctrlr": true, 00:31:05.524 "ana_reporting": false 00:31:05.524 }, 00:31:05.524 "vs": { 00:31:05.524 "nvme_version": "1.3" 00:31:05.524 }, 00:31:05.524 "ns_data": { 00:31:05.524 "id": 1, 00:31:05.524 "can_share": true 00:31:05.524 } 00:31:05.524 } 00:31:05.524 ], 00:31:05.524 "mp_policy": "active_passive" 00:31:05.524 } 00:31:05.524 } 00:31:05.524 ] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5WhTPBD771 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5WhTPBD771 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 [2024-07-12 01:49:31.806103] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:05.524 [2024-07-12 01:49:31.806213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5WhTPBD771 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 [2024-07-12 01:49:31.814118] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5WhTPBD771 00:31:05.524 01:49:31 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.524 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.524 [2024-07-12 01:49:31.822143] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:05.524 [2024-07-12 01:49:31.822177] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:05.784 nvme0n1 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.784 [ 00:31:05.784 { 00:31:05.784 "name": "nvme0n1", 00:31:05.784 "aliases": [ 00:31:05.784 "e949a629-40ab-420d-b2fe-744d0f3d4f1d" 00:31:05.784 ], 00:31:05.784 "product_name": "NVMe disk", 00:31:05.784 "block_size": 512, 00:31:05.784 "num_blocks": 2097152, 00:31:05.784 "uuid": "e949a629-40ab-420d-b2fe-744d0f3d4f1d", 00:31:05.784 "assigned_rate_limits": { 00:31:05.784 "rw_ios_per_sec": 0, 00:31:05.784 "rw_mbytes_per_sec": 0, 00:31:05.784 "r_mbytes_per_sec": 0, 00:31:05.784 "w_mbytes_per_sec": 0 00:31:05.784 }, 00:31:05.784 "claimed": false, 00:31:05.784 "zoned": false, 00:31:05.784 "supported_io_types": { 00:31:05.784 "read": true, 00:31:05.784 "write": true, 00:31:05.784 "unmap": false, 00:31:05.784 "write_zeroes": true, 00:31:05.784 "flush": true, 00:31:05.784 "reset": true, 00:31:05.784 "compare": true, 00:31:05.784 "compare_and_write": true, 00:31:05.784 "abort": true, 00:31:05.784 "nvme_admin": true, 00:31:05.784 "nvme_io": true 00:31:05.784 }, 00:31:05.784 "memory_domains": [ 00:31:05.784 { 00:31:05.784 "dma_device_id": "system", 00:31:05.784 "dma_device_type": 1 00:31:05.784 } 00:31:05.784 ], 00:31:05.784 "driver_specific": { 00:31:05.784 "nvme": [ 00:31:05.784 { 00:31:05.784 "trid": { 00:31:05.784 "trtype": "TCP", 00:31:05.784 "adrfam": "IPv4", 00:31:05.784 "traddr": "10.0.0.2", 00:31:05.784 "trsvcid": "4421", 00:31:05.784 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:05.784 }, 00:31:05.784 "ctrlr_data": { 00:31:05.784 "cntlid": 3, 00:31:05.784 "vendor_id": "0x8086", 00:31:05.784 "model_number": "SPDK bdev Controller", 00:31:05.784 "serial_number": "00000000000000000000", 00:31:05.784 "firmware_revision": "24.05.1", 00:31:05.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.784 "oacs": { 00:31:05.784 "security": 0, 00:31:05.784 "format": 0, 00:31:05.784 "firmware": 0, 00:31:05.784 "ns_manage": 0 00:31:05.784 }, 00:31:05.784 "multi_ctrlr": true, 00:31:05.784 "ana_reporting": false 00:31:05.784 }, 00:31:05.784 "vs": { 00:31:05.784 "nvme_version": "1.3" 00:31:05.784 }, 00:31:05.784 "ns_data": { 00:31:05.784 "id": 1, 00:31:05.784 "can_share": true 00:31:05.784 } 00:31:05.784 } 00:31:05.784 ], 00:31:05.784 "mp_policy": "active_passive" 00:31:05.784 } 00:31:05.784 } 00:31:05.784 ] 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5WhTPBD771 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.784 rmmod nvme_tcp 00:31:05.784 rmmod nvme_fabrics 00:31:05.784 rmmod nvme_keyring 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4148348 ']' 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4148348 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 4148348 ']' 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 4148348 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:05.784 01:49:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4148348 00:31:05.784 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:05.784 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:05.784 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4148348' 00:31:05.784 killing process with pid 4148348 00:31:05.784 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 4148348 00:31:05.784 [2024-07-12 01:49:32.039035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:05.784 [2024-07-12 01:49:32.039062] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:05.784 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 4148348 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.045 
01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:06.045 01:49:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.954 01:49:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:07.954 00:31:07.954 real 0m11.946s 00:31:07.954 user 0m4.030s 00:31:07.954 sys 0m6.267s 00:31:07.954 01:49:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:07.954 01:49:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:07.954 ************************************ 00:31:07.954 END TEST nvmf_async_init 00:31:07.954 ************************************ 00:31:07.954 01:49:34 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:07.954 01:49:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:07.954 01:49:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:07.954 01:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:07.954 ************************************ 00:31:07.954 START TEST dma 00:31:07.954 ************************************ 00:31:07.954 01:49:34 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:08.215 * Looking for test storage... 00:31:08.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.215 01:49:34 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.215 01:49:34 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.215 01:49:34 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.215 01:49:34 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.215 01:49:34 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.215 01:49:34 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.215 01:49:34 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.215 01:49:34 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:31:08.215 01:49:34 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.215 01:49:34 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.215 01:49:34 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:08.215 01:49:34 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:31:08.215 00:31:08.215 real 0m0.138s 00:31:08.215 user 0m0.063s 00:31:08.215 sys 0m0.082s 00:31:08.215 
01:49:34 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.215 01:49:34 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.215 ************************************ 00:31:08.215 END TEST dma 00:31:08.215 ************************************ 00:31:08.215 01:49:34 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:08.215 01:49:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:08.215 01:49:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.215 01:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.215 ************************************ 00:31:08.215 START TEST nvmf_identify 00:31:08.215 ************************************ 00:31:08.215 01:49:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:08.476 * Looking for test storage... 00:31:08.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.476 01:49:34 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
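Every suite that sources nvmf/common.sh, as nvmf_identify does just above, derives a per-run host identity before touching the target: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID portion is kept as NVME_HOSTID, and both values are packed into the NVME_HOST array of initiator-side flags. A minimal sketch of that derivation, assuming nvme-cli is installed and that stripping everything up to the last colon is an acceptable stand-in for however common.sh extracts the ID:

  #!/usr/bin/env bash
  # derive the per-run host identity the way the trace above does
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID portion (assumption: extraction method)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "host NQN: $NVME_HOSTNQN"
  echo "host ID : $NVME_HOSTID"

Generating the NQN fresh on each invocation gives every run its own initiator identity rather than reusing a fixed host NQN across tests.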
00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:31:08.476 01:49:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.612 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.613 01:49:42 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:16.613 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:16.613 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:16.613 Found net devices under 0000:31:00.0: cvl_0_0 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:16.613 Found net devices under 0000:31:00.1: cvl_0_1 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.613 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:31:16.613 00:31:16.613 --- 10.0.0.2 ping statistics --- 00:31:16.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.613 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:31:16.613 00:31:16.613 --- 10.0.0.1 ping statistics --- 00:31:16.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.613 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4153391 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4153391 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 4153391 ']' 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:16.613 01:49:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.613 [2024-07-12 01:49:42.917557] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
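The namespace plumbing traced above reduces to the short sequence below. This is only a condensed sketch of the commands this run executed: the interface names (cvl_0_0 and cvl_0_1, the two E810 ports found earlier), the 10.0.0.0/24 addressing, and the nvmf_tgt flags are taken from this log, while the nvmf_tgt path is shortened to be relative to an SPDK checkout.

# Move one port (cvl_0_0) into a private netns for the target, keep the other
# (cvl_0_1) in the root netns for the initiator, verify connectivity, then start
# nvmf_tgt inside the namespace -- as the trace above does.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                    # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target netns -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &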
00:31:16.613 [2024-07-12 01:49:42.917623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.613 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.876 [2024-07-12 01:49:42.997516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.876 [2024-07-12 01:49:43.038734] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.876 [2024-07-12 01:49:43.038778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.876 [2024-07-12 01:49:43.038786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.876 [2024-07-12 01:49:43.038793] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.876 [2024-07-12 01:49:43.038799] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.876 [2024-07-12 01:49:43.038938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.876 [2024-07-12 01:49:43.039056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.876 [2024-07-12 01:49:43.039192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.876 [2024-07-12 01:49:43.039193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.445 [2024-07-12 01:49:43.708784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.445 Malloc0 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.445 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.709 [2024-07-12 01:49:43.808394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.709 [ 00:31:17.709 { 00:31:17.709 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:17.709 "subtype": "Discovery", 00:31:17.709 "listen_addresses": [ 00:31:17.709 { 00:31:17.709 "trtype": "TCP", 00:31:17.709 "adrfam": "IPv4", 00:31:17.709 "traddr": "10.0.0.2", 00:31:17.709 "trsvcid": "4420" 00:31:17.709 } 00:31:17.709 ], 00:31:17.709 "allow_any_host": true, 00:31:17.709 "hosts": [] 00:31:17.709 }, 00:31:17.709 { 00:31:17.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.709 "subtype": "NVMe", 00:31:17.709 "listen_addresses": [ 00:31:17.709 { 00:31:17.709 "trtype": "TCP", 00:31:17.709 "adrfam": "IPv4", 00:31:17.709 "traddr": "10.0.0.2", 00:31:17.709 "trsvcid": "4420" 00:31:17.709 } 00:31:17.709 ], 00:31:17.709 "allow_any_host": true, 00:31:17.709 "hosts": [], 00:31:17.709 "serial_number": "SPDK00000000000001", 00:31:17.709 "model_number": "SPDK bdev Controller", 00:31:17.709 "max_namespaces": 32, 00:31:17.709 "min_cntlid": 1, 00:31:17.709 "max_cntlid": 65519, 00:31:17.709 "namespaces": [ 00:31:17.709 { 00:31:17.709 "nsid": 1, 00:31:17.709 "bdev_name": "Malloc0", 00:31:17.709 "name": "Malloc0", 00:31:17.709 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:17.709 "eui64": "ABCDEF0123456789", 00:31:17.709 "uuid": "a124c774-b459-4d02-add5-dd20f087e980" 00:31:17.709 } 00:31:17.709 ] 00:31:17.709 } 00:31:17.709 ] 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.709 01:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:17.709 [2024-07-12 01:49:43.864935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
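The rpc_cmd calls traced above are the test harness's wrapper around SPDK's scripts/rpc.py; outside the harness the same target configuration could be reproduced roughly as follows. This is a sketch under the assumption that the target listens on its default /var/tmp/spdk.sock RPC socket; the method names and flags are copied from the log, not re-derived.

# Configure the TCP transport, a RAM-backed namespace, and the listeners, then
# query the discovery subsystem from the initiator side as the test does next.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems                                     # JSON dump as shown above
build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'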
00:31:17.710 [2024-07-12 01:49:43.864995] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153728 ] 00:31:17.710 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.710 [2024-07-12 01:49:43.896900] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:17.710 [2024-07-12 01:49:43.896950] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:17.710 [2024-07-12 01:49:43.896958] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:17.710 [2024-07-12 01:49:43.896970] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:17.710 [2024-07-12 01:49:43.896979] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:17.710 [2024-07-12 01:49:43.900255] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:17.710 [2024-07-12 01:49:43.900285] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd58fb0 0 00:31:17.710 [2024-07-12 01:49:43.900380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:17.710 [2024-07-12 01:49:43.900388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:17.710 [2024-07-12 01:49:43.900392] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:17.710 [2024-07-12 01:49:43.900396] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:17.710 [2024-07-12 01:49:43.900428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.900434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.900439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.900452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:17.710 [2024-07-12 01:49:43.900465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.908266] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:17.710 [2024-07-12 01:49:43.908272] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:17.710 [2024-07-12 01:49:43.908277] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:17.710 [2024-07-12 01:49:43.908291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.908306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.908318] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908398] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.908409] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:17.710 [2024-07-12 01:49:43.908417] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:17.710 [2024-07-12 01:49:43.908423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908427] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908430] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.908440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.908450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908525] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.908530] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:17.710 [2024-07-12 01:49:43.908538] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.908544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908548] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.908558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.908568] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908635] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908641] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908648] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.908653] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.908662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908666] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.908676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.908686] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908744] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.908761] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:17.710 [2024-07-12 01:49:43.908766] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.908774] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.908879] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:17.710 [2024-07-12 01:49:43.908883] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.908891] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908897] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.908900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.908907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.908917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.908985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.908991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.908995] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 
[2024-07-12 01:49:43.908998] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.909003] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:17.710 [2024-07-12 01:49:43.909012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.909035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.909100] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.909106] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.909110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909113] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.909118] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:17.710 [2024-07-12 01:49:43.909122] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:17.710 [2024-07-12 01:49:43.909130] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:17.710 [2024-07-12 01:49:43.909142] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:17.710 [2024-07-12 01:49:43.909152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.710 [2024-07-12 01:49:43.909173] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.909272] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.710 [2024-07-12 01:49:43.909279] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.710 [2024-07-12 01:49:43.909283] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909287] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58fb0): datao=0, datal=4096, cccid=0 00:31:17.710 [2024-07-12 01:49:43.909291] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc6320) on tqpair(0xd58fb0): expected_datao=0, payload_size=4096 00:31:17.710 [2024-07-12 01:49:43.909296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 
[2024-07-12 01:49:43.909303] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909310] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.909375] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.909378] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909382] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.909391] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:17.710 [2024-07-12 01:49:43.909396] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:17.710 [2024-07-12 01:49:43.909400] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:17.710 [2024-07-12 01:49:43.909405] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:17.710 [2024-07-12 01:49:43.909410] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:17.710 [2024-07-12 01:49:43.909414] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:17.710 [2024-07-12 01:49:43.909422] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:17.710 [2024-07-12 01:49:43.909428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.710 [2024-07-12 01:49:43.909453] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.710 [2024-07-12 01:49:43.909527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.710 [2024-07-12 01:49:43.909533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.710 [2024-07-12 01:49:43.909536] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6320) on tqpair=0xd58fb0 00:31:17.710 [2024-07-12 01:49:43.909547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.710 [2024-07-12 01:49:43.909566] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.710 [2024-07-12 01:49:43.909585] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.710 [2024-07-12 01:49:43.909592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd58fb0) 00:31:17.710 [2024-07-12 01:49:43.909597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.710 [2024-07-12 01:49:43.909603] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.909618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.711 [2024-07-12 01:49:43.909623] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:17.711 [2024-07-12 01:49:43.909632] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:17.711 [2024-07-12 01:49:43.909638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.909649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.711 [2024-07-12 01:49:43.909660] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6320, cid 0, qid 0 00:31:17.711 [2024-07-12 01:49:43.909665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6480, cid 1, qid 0 00:31:17.711 [2024-07-12 01:49:43.909670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc65e0, cid 2, qid 0 00:31:17.711 [2024-07-12 01:49:43.909674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.711 [2024-07-12 01:49:43.909679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc68a0, cid 4, qid 0 00:31:17.711 [2024-07-12 01:49:43.909789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:43.909795] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:43.909799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909802] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc68a0) on tqpair=0xd58fb0 00:31:17.711 [2024-07-12 01:49:43.909807] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:17.711 [2024-07-12 01:49:43.909812] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:17.711 [2024-07-12 01:49:43.909822] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.909832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.711 [2024-07-12 01:49:43.909841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc68a0, cid 4, qid 0 00:31:17.711 [2024-07-12 01:49:43.909908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.711 [2024-07-12 01:49:43.909914] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.711 [2024-07-12 01:49:43.909918] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909921] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58fb0): datao=0, datal=4096, cccid=4 00:31:17.711 [2024-07-12 01:49:43.909926] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc68a0) on tqpair(0xd58fb0): expected_datao=0, payload_size=4096 00:31:17.711 [2024-07-12 01:49:43.909930] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909951] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.909955] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:43.950289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:43.950294] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc68a0) on tqpair=0xd58fb0 00:31:17.711 [2024-07-12 01:49:43.950311] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:17.711 [2024-07-12 01:49:43.950332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950337] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.950344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.711 [2024-07-12 01:49:43.950351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950354] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950358] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.950364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.711 [2024-07-12 01:49:43.950380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xdc68a0, cid 4, qid 0 00:31:17.711 [2024-07-12 01:49:43.950385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6a00, cid 5, qid 0 00:31:17.711 [2024-07-12 01:49:43.950486] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.711 [2024-07-12 01:49:43.950493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.711 [2024-07-12 01:49:43.950496] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950499] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58fb0): datao=0, datal=1024, cccid=4 00:31:17.711 [2024-07-12 01:49:43.950504] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc68a0) on tqpair(0xd58fb0): expected_datao=0, payload_size=1024 00:31:17.711 [2024-07-12 01:49:43.950508] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950515] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950518] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:43.950529] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:43.950533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.950536] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6a00) on tqpair=0xd58fb0 00:31:17.711 [2024-07-12 01:49:43.991286] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:43.991295] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:43.991298] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991302] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc68a0) on tqpair=0xd58fb0 00:31:17.711 [2024-07-12 01:49:43.991312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.991322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.711 [2024-07-12 01:49:43.991336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc68a0, cid 4, qid 0 00:31:17.711 [2024-07-12 01:49:43.991410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.711 [2024-07-12 01:49:43.991416] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.711 [2024-07-12 01:49:43.991420] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991423] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58fb0): datao=0, datal=3072, cccid=4 00:31:17.711 [2024-07-12 01:49:43.991430] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc68a0) on tqpair(0xd58fb0): expected_datao=0, payload_size=3072 00:31:17.711 [2024-07-12 01:49:43.991434] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991441] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991444] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991463] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:43.991469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:43.991473] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991476] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc68a0) on tqpair=0xd58fb0 00:31:17.711 [2024-07-12 01:49:43.991484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58fb0) 00:31:17.711 [2024-07-12 01:49:43.991494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.711 [2024-07-12 01:49:43.991507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc68a0, cid 4, qid 0 00:31:17.711 [2024-07-12 01:49:43.991613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.711 [2024-07-12 01:49:43.991619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.711 [2024-07-12 01:49:43.991622] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991626] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58fb0): datao=0, datal=8, cccid=4 00:31:17.711 [2024-07-12 01:49:43.991630] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc68a0) on tqpair(0xd58fb0): expected_datao=0, payload_size=8 00:31:17.711 [2024-07-12 01:49:43.991634] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991641] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:43.991644] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:44.036239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.711 [2024-07-12 01:49:44.036251] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.711 [2024-07-12 01:49:44.036255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.711 [2024-07-12 01:49:44.036259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc68a0) on tqpair=0xd58fb0 00:31:17.711 ===================================================== 00:31:17.711 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:17.711 ===================================================== 00:31:17.711 Controller Capabilities/Features 00:31:17.711 ================================ 00:31:17.711 Vendor ID: 0000 00:31:17.711 Subsystem Vendor ID: 0000 00:31:17.711 Serial Number: .................... 00:31:17.711 Model Number: ........................................ 
00:31:17.711 Firmware Version: 24.05.1 00:31:17.711 Recommended Arb Burst: 0 00:31:17.711 IEEE OUI Identifier: 00 00 00 00:31:17.711 Multi-path I/O 00:31:17.711 May have multiple subsystem ports: No 00:31:17.711 May have multiple controllers: No 00:31:17.711 Associated with SR-IOV VF: No 00:31:17.711 Max Data Transfer Size: 131072 00:31:17.711 Max Number of Namespaces: 0 00:31:17.711 Max Number of I/O Queues: 1024 00:31:17.711 NVMe Specification Version (VS): 1.3 00:31:17.711 NVMe Specification Version (Identify): 1.3 00:31:17.711 Maximum Queue Entries: 128 00:31:17.711 Contiguous Queues Required: Yes 00:31:17.711 Arbitration Mechanisms Supported 00:31:17.711 Weighted Round Robin: Not Supported 00:31:17.711 Vendor Specific: Not Supported 00:31:17.711 Reset Timeout: 15000 ms 00:31:17.711 Doorbell Stride: 4 bytes 00:31:17.711 NVM Subsystem Reset: Not Supported 00:31:17.711 Command Sets Supported 00:31:17.711 NVM Command Set: Supported 00:31:17.711 Boot Partition: Not Supported 00:31:17.711 Memory Page Size Minimum: 4096 bytes 00:31:17.711 Memory Page Size Maximum: 4096 bytes 00:31:17.711 Persistent Memory Region: Not Supported 00:31:17.711 Optional Asynchronous Events Supported 00:31:17.711 Namespace Attribute Notices: Not Supported 00:31:17.711 Firmware Activation Notices: Not Supported 00:31:17.711 ANA Change Notices: Not Supported 00:31:17.711 PLE Aggregate Log Change Notices: Not Supported 00:31:17.711 LBA Status Info Alert Notices: Not Supported 00:31:17.711 EGE Aggregate Log Change Notices: Not Supported 00:31:17.711 Normal NVM Subsystem Shutdown event: Not Supported 00:31:17.711 Zone Descriptor Change Notices: Not Supported 00:31:17.711 Discovery Log Change Notices: Supported 00:31:17.711 Controller Attributes 00:31:17.711 128-bit Host Identifier: Not Supported 00:31:17.711 Non-Operational Permissive Mode: Not Supported 00:31:17.711 NVM Sets: Not Supported 00:31:17.711 Read Recovery Levels: Not Supported 00:31:17.711 Endurance Groups: Not Supported 00:31:17.711 Predictable Latency Mode: Not Supported 00:31:17.711 Traffic Based Keep ALive: Not Supported 00:31:17.711 Namespace Granularity: Not Supported 00:31:17.711 SQ Associations: Not Supported 00:31:17.711 UUID List: Not Supported 00:31:17.711 Multi-Domain Subsystem: Not Supported 00:31:17.711 Fixed Capacity Management: Not Supported 00:31:17.711 Variable Capacity Management: Not Supported 00:31:17.711 Delete Endurance Group: Not Supported 00:31:17.711 Delete NVM Set: Not Supported 00:31:17.711 Extended LBA Formats Supported: Not Supported 00:31:17.711 Flexible Data Placement Supported: Not Supported 00:31:17.711 00:31:17.711 Controller Memory Buffer Support 00:31:17.711 ================================ 00:31:17.711 Supported: No 00:31:17.711 00:31:17.711 Persistent Memory Region Support 00:31:17.711 ================================ 00:31:17.711 Supported: No 00:31:17.711 00:31:17.711 Admin Command Set Attributes 00:31:17.711 ============================ 00:31:17.711 Security Send/Receive: Not Supported 00:31:17.711 Format NVM: Not Supported 00:31:17.711 Firmware Activate/Download: Not Supported 00:31:17.711 Namespace Management: Not Supported 00:31:17.711 Device Self-Test: Not Supported 00:31:17.711 Directives: Not Supported 00:31:17.711 NVMe-MI: Not Supported 00:31:17.712 Virtualization Management: Not Supported 00:31:17.712 Doorbell Buffer Config: Not Supported 00:31:17.712 Get LBA Status Capability: Not Supported 00:31:17.712 Command & Feature Lockdown Capability: Not Supported 00:31:17.712 Abort Command Limit: 1 00:31:17.712 
Async Event Request Limit: 4 00:31:17.712 Number of Firmware Slots: N/A 00:31:17.712 Firmware Slot 1 Read-Only: N/A 00:31:17.712 Firmware Activation Without Reset: N/A 00:31:17.712 Multiple Update Detection Support: N/A 00:31:17.712 Firmware Update Granularity: No Information Provided 00:31:17.712 Per-Namespace SMART Log: No 00:31:17.712 Asymmetric Namespace Access Log Page: Not Supported 00:31:17.712 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:17.712 Command Effects Log Page: Not Supported 00:31:17.712 Get Log Page Extended Data: Supported 00:31:17.712 Telemetry Log Pages: Not Supported 00:31:17.712 Persistent Event Log Pages: Not Supported 00:31:17.712 Supported Log Pages Log Page: May Support 00:31:17.712 Commands Supported & Effects Log Page: Not Supported 00:31:17.712 Feature Identifiers & Effects Log Page:May Support 00:31:17.712 NVMe-MI Commands & Effects Log Page: May Support 00:31:17.712 Data Area 4 for Telemetry Log: Not Supported 00:31:17.712 Error Log Page Entries Supported: 128 00:31:17.712 Keep Alive: Not Supported 00:31:17.712 00:31:17.712 NVM Command Set Attributes 00:31:17.712 ========================== 00:31:17.712 Submission Queue Entry Size 00:31:17.712 Max: 1 00:31:17.712 Min: 1 00:31:17.712 Completion Queue Entry Size 00:31:17.712 Max: 1 00:31:17.712 Min: 1 00:31:17.712 Number of Namespaces: 0 00:31:17.712 Compare Command: Not Supported 00:31:17.712 Write Uncorrectable Command: Not Supported 00:31:17.712 Dataset Management Command: Not Supported 00:31:17.712 Write Zeroes Command: Not Supported 00:31:17.712 Set Features Save Field: Not Supported 00:31:17.712 Reservations: Not Supported 00:31:17.712 Timestamp: Not Supported 00:31:17.712 Copy: Not Supported 00:31:17.712 Volatile Write Cache: Not Present 00:31:17.712 Atomic Write Unit (Normal): 1 00:31:17.712 Atomic Write Unit (PFail): 1 00:31:17.712 Atomic Compare & Write Unit: 1 00:31:17.712 Fused Compare & Write: Supported 00:31:17.712 Scatter-Gather List 00:31:17.712 SGL Command Set: Supported 00:31:17.712 SGL Keyed: Supported 00:31:17.712 SGL Bit Bucket Descriptor: Not Supported 00:31:17.712 SGL Metadata Pointer: Not Supported 00:31:17.712 Oversized SGL: Not Supported 00:31:17.712 SGL Metadata Address: Not Supported 00:31:17.712 SGL Offset: Supported 00:31:17.712 Transport SGL Data Block: Not Supported 00:31:17.712 Replay Protected Memory Block: Not Supported 00:31:17.712 00:31:17.712 Firmware Slot Information 00:31:17.712 ========================= 00:31:17.712 Active slot: 0 00:31:17.712 00:31:17.712 00:31:17.712 Error Log 00:31:17.712 ========= 00:31:17.712 00:31:17.712 Active Namespaces 00:31:17.712 ================= 00:31:17.712 Discovery Log Page 00:31:17.712 ================== 00:31:17.712 Generation Counter: 2 00:31:17.712 Number of Records: 2 00:31:17.712 Record Format: 0 00:31:17.712 00:31:17.712 Discovery Log Entry 0 00:31:17.712 ---------------------- 00:31:17.712 Transport Type: 3 (TCP) 00:31:17.712 Address Family: 1 (IPv4) 00:31:17.712 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:17.712 Entry Flags: 00:31:17.712 Duplicate Returned Information: 1 00:31:17.712 Explicit Persistent Connection Support for Discovery: 1 00:31:17.712 Transport Requirements: 00:31:17.712 Secure Channel: Not Required 00:31:17.712 Port ID: 0 (0x0000) 00:31:17.712 Controller ID: 65535 (0xffff) 00:31:17.712 Admin Max SQ Size: 128 00:31:17.712 Transport Service Identifier: 4420 00:31:17.712 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:17.712 Transport Address: 10.0.0.2 00:31:17.712 
Discovery Log Entry 1 00:31:17.712 ---------------------- 00:31:17.712 Transport Type: 3 (TCP) 00:31:17.712 Address Family: 1 (IPv4) 00:31:17.712 Subsystem Type: 2 (NVM Subsystem) 00:31:17.712 Entry Flags: 00:31:17.712 Duplicate Returned Information: 0 00:31:17.712 Explicit Persistent Connection Support for Discovery: 0 00:31:17.712 Transport Requirements: 00:31:17.712 Secure Channel: Not Required 00:31:17.712 Port ID: 0 (0x0000) 00:31:17.712 Controller ID: 65535 (0xffff) 00:31:17.712 Admin Max SQ Size: 128 00:31:17.712 Transport Service Identifier: 4420 00:31:17.712 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:17.712 Transport Address: 10.0.0.2 [2024-07-12 01:49:44.036345] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:17.712 [2024-07-12 01:49:44.036358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.712 [2024-07-12 01:49:44.036365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.712 [2024-07-12 01:49:44.036371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.712 [2024-07-12 01:49:44.036377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.712 [2024-07-12 01:49:44.036387] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036391] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.036480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.036489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.036492] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036496] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.036503] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036510] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.036596] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.036602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.036605] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036609] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.036614] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:17.712 [2024-07-12 01:49:44.036618] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:17.712 [2024-07-12 01:49:44.036627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036651] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.036713] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.036719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.036723] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.036736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036760] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.036829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.036835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.036838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.036852] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036856] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036859] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036877] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.036940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 
01:49:44.036946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.036949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.036962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036966] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.036969] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.036976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.036986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.037050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.037056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.037060] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037064] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.037073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037080] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.037087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.037097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.037159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.037165] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.037168] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.037181] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037188] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.037195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.037205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.037310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.037317] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.037320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 
[2024-07-12 01:49:44.037324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.037334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037337] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037341] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.712 [2024-07-12 01:49:44.037347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.712 [2024-07-12 01:49:44.037357] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.712 [2024-07-12 01:49:44.037422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.712 [2024-07-12 01:49:44.037428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.712 [2024-07-12 01:49:44.037431] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.712 [2024-07-12 01:49:44.037444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.712 [2024-07-12 01:49:44.037452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.037458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.037468] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.037530] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.037536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.037540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037543] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.037553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.037567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.037576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.037641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.037647] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.037651] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.037664] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037668] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037671] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.037678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.037687] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.037790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.037796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.037800] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.037813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.037827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.037836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.037902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.037911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.037915] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037918] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.037928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.037935] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.037942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.037951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038027] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038037] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038040] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038044] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038060] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038135] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038148] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038241] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038271] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038378] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038398] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038461] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038467] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038470] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038474] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038582] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038599] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038706] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038710] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 
00:31:17.713 [2024-07-12 01:49:44.038795] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.038906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.038912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.038916] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038919] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.038929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.038936] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.038942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.038952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.039017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.039023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.713 [2024-07-12 01:49:44.039026] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.039030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.039040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.039043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.039047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.713 [2024-07-12 01:49:44.039053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.713 [2024-07-12 01:49:44.039063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.713 [2024-07-12 01:49:44.039125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.713 [2024-07-12 01:49:44.039131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:31:17.713 [2024-07-12 01:49:44.039134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.713 [2024-07-12 01:49:44.039138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.713 [2024-07-12 01:49:44.039147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039290] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039349] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039359] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039363] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039379] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039458] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039689] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039693] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039702] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039712] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039800] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039816] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.039898] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.039904] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.039908] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039912] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.039921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039925] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.039928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.039935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.039944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.040009] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.040016] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.040019] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.040032] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040039] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.040046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.040056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.040124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.040130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.040134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040137] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.040147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.040156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 
00:31:17.714 [2024-07-12 01:49:44.040162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.040172] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.044237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.044245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.044248] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.044252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.044262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.044266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.044270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58fb0) 00:31:17.714 [2024-07-12 01:49:44.044276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.714 [2024-07-12 01:49:44.044288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc6740, cid 3, qid 0 00:31:17.714 [2024-07-12 01:49:44.044368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.714 [2024-07-12 01:49:44.044374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.714 [2024-07-12 01:49:44.044377] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.714 [2024-07-12 01:49:44.044381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdc6740) on tqpair=0xd58fb0 00:31:17.714 [2024-07-12 01:49:44.044388] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:31:17.714 00:31:17.714 01:49:44 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:17.978 [2024-07-12 01:49:44.082375] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
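The identify test here invokes spdk_nvme_identify with a single transport ID string (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and full debug logging (-L all). A minimal sketch of what that amounts to at the public SPDK host API level is shown below: it parses the same transport string, connects to the controller, and reads the controller data. This is an illustrative sketch, not the identify tool's actual source.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment (DPDK EAL / hugepages underneath). */
        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Parse the same transport ID string the test passes via -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Connect: this drives the admin queue bring-up and IDENTIFY flow
         * traced in the debug output that follows. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("serial: %.20s model: %.40s\n",
               (const char *)cdata->sn, (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
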
00:31:17.978 [2024-07-12 01:49:44.082429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153742 ] 00:31:17.978 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.978 [2024-07-12 01:49:44.114756] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:17.978 [2024-07-12 01:49:44.114803] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:17.978 [2024-07-12 01:49:44.114808] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:17.978 [2024-07-12 01:49:44.114819] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:17.978 [2024-07-12 01:49:44.114827] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:17.978 [2024-07-12 01:49:44.118256] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:17.978 [2024-07-12 01:49:44.118286] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcc3fb0 0 00:31:17.978 [2024-07-12 01:49:44.126236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:17.978 [2024-07-12 01:49:44.126245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:17.978 [2024-07-12 01:49:44.126249] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:17.978 [2024-07-12 01:49:44.126256] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:17.978 [2024-07-12 01:49:44.126285] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.126290] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.126294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.126306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:17.978 [2024-07-12 01:49:44.126321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.134240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.134249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.134252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134257] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.134268] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:17.978 [2024-07-12 01:49:44.134273] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:17.978 [2024-07-12 01:49:44.134278] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:17.978 [2024-07-12 01:49:44.134290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 
01:49:44.134297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.134305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.134317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.134512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.134519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.134522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.134533] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:17.978 [2024-07-12 01:49:44.134540] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:17.978 [2024-07-12 01:49:44.134547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134555] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.134561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.134572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.134736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.134742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.134745] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134749] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.134754] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:17.978 [2024-07-12 01:49:44.134761] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.134770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.134784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.134794] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.134963] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.134969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 
[2024-07-12 01:49:44.134972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.134981] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.134990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.134997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.135004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.135014] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.135184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.135190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.135194] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.135201] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:17.978 [2024-07-12 01:49:44.135206] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.135213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.135319] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:17.978 [2024-07-12 01:49:44.135323] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.135330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135334] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135338] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.135344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.135354] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.135560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.135566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.135569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 
[2024-07-12 01:49:44.135577] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:17.978 [2024-07-12 01:49:44.135589] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.135603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.135613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.135790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.135796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.135799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.135807] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:17.978 [2024-07-12 01:49:44.135812] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.135820] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:17.978 [2024-07-12 01:49:44.135827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.135836] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.135840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.135847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.135857] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.136098] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.978 [2024-07-12 01:49:44.136104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.978 [2024-07-12 01:49:44.136108] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.136111] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=4096, cccid=0 00:31:17.978 [2024-07-12 01:49:44.136116] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd31320) on tqpair(0xcc3fb0): expected_datao=0, payload_size=4096 00:31:17.978 [2024-07-12 01:49:44.136120] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.136132] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.136136] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
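The controller bring-up traced above follows the standard NVMe enable handshake: the host finds CC.EN = 0 && CSTS.RDY = 0, writes CC.EN = 1, then polls until CSTS.RDY = 1 before moving on to reset the admin queue and IDENTIFY (over NVMe-oF this is carried by the Fabrics PROPERTY GET/SET commands visible in the log). Below is a small self-contained sketch of that handshake; the property accessors are simulated stand-ins for illustration, not SPDK APIs.

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_CC_EN    (1u << 0)   /* CC register (offset 0x14), Enable bit */
    #define NVME_CSTS_RDY (1u << 0)   /* CSTS register (offset 0x1c), Ready bit */

    static uint32_t sim_cc, sim_csts;  /* simulated controller properties */

    /* Stand-in for Fabrics PROPERTY GET; fakes the controller reporting ready
     * once CC.EN has been set. */
    static uint32_t prop_get(int is_csts)
    {
        if (is_csts && (sim_cc & NVME_CC_EN)) {
            sim_csts |= NVME_CSTS_RDY;
        }
        return is_csts ? sim_csts : sim_cc;
    }

    /* Stand-in for Fabrics PROPERTY SET on CC. */
    static void prop_set_cc(uint32_t v) { sim_cc = v; }

    int main(void)
    {
        uint32_t cc = prop_get(0);
        if (!(cc & NVME_CC_EN)) {               /* CC.EN = 0 && CSTS.RDY = 0 */
            prop_set_cc(cc | NVME_CC_EN);       /* Setting CC.EN = 1 */
        }
        while (!(prop_get(1) & NVME_CSTS_RDY)) { /* wait for CSTS.RDY = 1 */
            ;
        }
        /* controller is ready -> reset admin queue, IDENTIFY, configure AER, ... */
        printf("controller is ready\n");
        return 0;
    }
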
00:31:17.978 [2024-07-12 01:49:44.176305] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.176314] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.176317] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176321] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.176331] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:17.978 [2024-07-12 01:49:44.176336] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:17.978 [2024-07-12 01:49:44.176340] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:17.978 [2024-07-12 01:49:44.176344] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:17.978 [2024-07-12 01:49:44.176350] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:17.978 [2024-07-12 01:49:44.176355] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.176364] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.176370] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176374] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176377] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.978 [2024-07-12 01:49:44.176396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.176666] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.176673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.176676] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176680] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31320) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.176686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.978 [2024-07-12 01:49:44.176706] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.978 [2024-07-12 01:49:44.176724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.978 [2024-07-12 01:49:44.176743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176749] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.978 [2024-07-12 01:49:44.176760] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.176769] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.176776] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.176779] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.176786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.176799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31320, cid 0, qid 0 00:31:17.978 [2024-07-12 01:49:44.176804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31480, cid 1, qid 0 00:31:17.978 [2024-07-12 01:49:44.176809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd315e0, cid 2, qid 0 00:31:17.978 [2024-07-12 01:49:44.176813] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.978 [2024-07-12 01:49:44.176818] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.978 [2024-07-12 01:49:44.177033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.177040] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.177043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.177051] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:17.978 [2024-07-12 01:49:44.177056] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:17.978 
[2024-07-12 01:49:44.177064] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.177070] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.177076] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.177089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.978 [2024-07-12 01:49:44.177099] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.978 [2024-07-12 01:49:44.177283] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.177289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.177293] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177296] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.177361] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.177369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.177376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.177386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.177397] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.978 [2024-07-12 01:49:44.177572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.978 [2024-07-12 01:49:44.177578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.978 [2024-07-12 01:49:44.177582] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177585] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=4096, cccid=4 00:31:17.978 [2024-07-12 01:49:44.177589] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd318a0) on tqpair(0xcc3fb0): expected_datao=0, payload_size=4096 00:31:17.978 [2024-07-12 01:49:44.177596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177618] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.177622] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.222247] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.222251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.978 [2024-07-12 01:49:44.222264] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:17.978 [2024-07-12 01:49:44.222279] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.222288] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:17.978 [2024-07-12 01:49:44.222295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.978 [2024-07-12 01:49:44.222305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.978 [2024-07-12 01:49:44.222317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.978 [2024-07-12 01:49:44.222521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.978 [2024-07-12 01:49:44.222528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.978 [2024-07-12 01:49:44.222531] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222534] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=4096, cccid=4 00:31:17.978 [2024-07-12 01:49:44.222539] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd318a0) on tqpair(0xcc3fb0): expected_datao=0, payload_size=4096 00:31:17.978 [2024-07-12 01:49:44.222543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222606] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.222610] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.263376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.978 [2024-07-12 01:49:44.263386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.978 [2024-07-12 01:49:44.263389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.978 [2024-07-12 01:49:44.263393] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.263405] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.263413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.263420] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.263424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.263431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.263442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.979 [2024-07-12 01:49:44.263632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.979 [2024-07-12 01:49:44.263639] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.979 [2024-07-12 01:49:44.263644] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.263648] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=4096, cccid=4 00:31:17.979 [2024-07-12 01:49:44.263652] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd318a0) on tqpair(0xcc3fb0): expected_datao=0, payload_size=4096 00:31:17.979 [2024-07-12 01:49:44.263657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.263663] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.263667] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.305385] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.305388] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.305400] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305407] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305416] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305422] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305426] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305432] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:17.979 [2024-07-12 01:49:44.305436] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:17.979 [2024-07-12 01:49:44.305441] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:17.979 [2024-07-12 01:49:44.305456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305460] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.305466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.305473] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.305486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.979 [2024-07-12 01:49:44.305499] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.979 [2024-07-12 01:49:44.305504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31a00, cid 5, qid 0 00:31:17.979 [2024-07-12 01:49:44.305698] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.305704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.305707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305711] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.305717] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.305723] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.305729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305732] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31a00) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.305741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.305751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.305761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31a00, cid 5, qid 0 00:31:17.979 [2024-07-12 01:49:44.305921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.305927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.305931] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305934] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31a00) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.305943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.305947] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.305953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.305962] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31a00, cid 5, qid 0 00:31:17.979 [2024-07-12 01:49:44.306146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.306152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.306156] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.306159] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31a00) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.306168] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.306172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.306178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.306187] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31a00, cid 5, qid 0 00:31:17.979 [2024-07-12 01:49:44.310238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.310246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.310249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31a00) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.310264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.310274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.310281] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.310291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.310298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.310307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.310317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc3fb0) 00:31:17.979 [2024-07-12 01:49:44.310327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.979 [2024-07-12 01:49:44.310339] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31a00, cid 5, qid 0 00:31:17.979 [2024-07-12 01:49:44.310344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd318a0, cid 4, qid 0 00:31:17.979 [2024-07-12 01:49:44.310348] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31b60, cid 6, qid 0 00:31:17.979 [2024-07-12 01:49:44.310353] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31cc0, cid 7, qid 0 00:31:17.979 [2024-07-12 01:49:44.310585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.979 [2024-07-12 01:49:44.310591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.979 [2024-07-12 01:49:44.310595] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310598] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=8192, cccid=5 00:31:17.979 [2024-07-12 01:49:44.310602] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd31a00) on tqpair(0xcc3fb0): expected_datao=0, payload_size=8192 00:31:17.979 [2024-07-12 01:49:44.310607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310668] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310672] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.979 [2024-07-12 01:49:44.310683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.979 [2024-07-12 01:49:44.310687] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310690] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=512, cccid=4 00:31:17.979 [2024-07-12 01:49:44.310695] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd318a0) on tqpair(0xcc3fb0): expected_datao=0, payload_size=512 00:31:17.979 [2024-07-12 01:49:44.310699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310705] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310709] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.979 [2024-07-12 01:49:44.310720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.979 [2024-07-12 01:49:44.310723] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310727] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=512, cccid=6 00:31:17.979 [2024-07-12 01:49:44.310731] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd31b60) on tqpair(0xcc3fb0): expected_datao=0, payload_size=512 00:31:17.979 [2024-07-12 01:49:44.310735] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310741] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310745] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:17.979 [2024-07-12 01:49:44.310756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:17.979 [2024-07-12 01:49:44.310760] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310763] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc3fb0): datao=0, datal=4096, cccid=7 00:31:17.979 [2024-07-12 01:49:44.310769] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd31cc0) on tqpair(0xcc3fb0): expected_datao=0, payload_size=4096 00:31:17.979 [2024-07-12 01:49:44.310774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310780] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310784] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310802] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.310808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.310812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310815] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31a00) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.310828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.310834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.310837] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310841] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd318a0) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.310849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.310855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.310859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310862] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31b60) on tqpair=0xcc3fb0 00:31:17.979 [2024-07-12 01:49:44.310871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.979 [2024-07-12 01:49:44.310877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.979 [2024-07-12 01:49:44.310880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.979 [2024-07-12 01:49:44.310883] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31cc0) on tqpair=0xcc3fb0 00:31:17.979 ===================================================== 00:31:17.979 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.979 ===================================================== 00:31:17.979 Controller Capabilities/Features 00:31:17.979 ================================ 00:31:17.979 Vendor ID: 8086 00:31:17.979 Subsystem Vendor ID: 8086 00:31:17.979 Serial Number: SPDK00000000000001 00:31:17.979 Model Number: SPDK bdev Controller 00:31:17.979 Firmware Version: 24.05.1 00:31:17.979 Recommended Arb Burst: 6 00:31:17.979 IEEE OUI Identifier: e4 d2 5c 00:31:17.979 Multi-path I/O 00:31:17.979 May have multiple subsystem ports: Yes 00:31:17.979 May have multiple controllers: Yes 00:31:17.979 Associated with SR-IOV VF: No 00:31:17.979 Max Data Transfer Size: 131072 00:31:17.979 Max Number of Namespaces: 32 00:31:17.979 Max Number of I/O Queues: 127 00:31:17.979 NVMe Specification Version (VS): 1.3 00:31:17.979 NVMe Specification Version (Identify): 1.3 00:31:17.979 Maximum Queue Entries: 128 00:31:17.979 Contiguous Queues Required: Yes 00:31:17.979 Arbitration Mechanisms Supported 00:31:17.979 Weighted Round Robin: Not Supported 00:31:17.979 Vendor Specific: Not Supported 00:31:17.979 Reset Timeout: 15000 ms 00:31:17.979 Doorbell Stride: 4 bytes 00:31:17.979 
NVM Subsystem Reset: Not Supported 00:31:17.979 Command Sets Supported 00:31:17.979 NVM Command Set: Supported 00:31:17.979 Boot Partition: Not Supported 00:31:17.979 Memory Page Size Minimum: 4096 bytes 00:31:17.979 Memory Page Size Maximum: 4096 bytes 00:31:17.979 Persistent Memory Region: Not Supported 00:31:17.979 Optional Asynchronous Events Supported 00:31:17.979 Namespace Attribute Notices: Supported 00:31:17.979 Firmware Activation Notices: Not Supported 00:31:17.979 ANA Change Notices: Not Supported 00:31:17.979 PLE Aggregate Log Change Notices: Not Supported 00:31:17.979 LBA Status Info Alert Notices: Not Supported 00:31:17.979 EGE Aggregate Log Change Notices: Not Supported 00:31:17.979 Normal NVM Subsystem Shutdown event: Not Supported 00:31:17.979 Zone Descriptor Change Notices: Not Supported 00:31:17.979 Discovery Log Change Notices: Not Supported 00:31:17.979 Controller Attributes 00:31:17.979 128-bit Host Identifier: Supported 00:31:17.979 Non-Operational Permissive Mode: Not Supported 00:31:17.979 NVM Sets: Not Supported 00:31:17.979 Read Recovery Levels: Not Supported 00:31:17.979 Endurance Groups: Not Supported 00:31:17.979 Predictable Latency Mode: Not Supported 00:31:17.979 Traffic Based Keep ALive: Not Supported 00:31:17.979 Namespace Granularity: Not Supported 00:31:17.979 SQ Associations: Not Supported 00:31:17.979 UUID List: Not Supported 00:31:17.979 Multi-Domain Subsystem: Not Supported 00:31:17.979 Fixed Capacity Management: Not Supported 00:31:17.979 Variable Capacity Management: Not Supported 00:31:17.979 Delete Endurance Group: Not Supported 00:31:17.979 Delete NVM Set: Not Supported 00:31:17.979 Extended LBA Formats Supported: Not Supported 00:31:17.979 Flexible Data Placement Supported: Not Supported 00:31:17.979 00:31:17.979 Controller Memory Buffer Support 00:31:17.979 ================================ 00:31:17.979 Supported: No 00:31:17.979 00:31:17.979 Persistent Memory Region Support 00:31:17.979 ================================ 00:31:17.979 Supported: No 00:31:17.979 00:31:17.979 Admin Command Set Attributes 00:31:17.979 ============================ 00:31:17.979 Security Send/Receive: Not Supported 00:31:17.979 Format NVM: Not Supported 00:31:17.979 Firmware Activate/Download: Not Supported 00:31:17.979 Namespace Management: Not Supported 00:31:17.979 Device Self-Test: Not Supported 00:31:17.979 Directives: Not Supported 00:31:17.979 NVMe-MI: Not Supported 00:31:17.979 Virtualization Management: Not Supported 00:31:17.979 Doorbell Buffer Config: Not Supported 00:31:17.979 Get LBA Status Capability: Not Supported 00:31:17.979 Command & Feature Lockdown Capability: Not Supported 00:31:17.979 Abort Command Limit: 4 00:31:17.979 Async Event Request Limit: 4 00:31:17.979 Number of Firmware Slots: N/A 00:31:17.979 Firmware Slot 1 Read-Only: N/A 00:31:17.979 Firmware Activation Without Reset: N/A 00:31:17.979 Multiple Update Detection Support: N/A 00:31:17.979 Firmware Update Granularity: No Information Provided 00:31:17.979 Per-Namespace SMART Log: No 00:31:17.979 Asymmetric Namespace Access Log Page: Not Supported 00:31:17.979 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:17.979 Command Effects Log Page: Supported 00:31:17.979 Get Log Page Extended Data: Supported 00:31:17.979 Telemetry Log Pages: Not Supported 00:31:17.979 Persistent Event Log Pages: Not Supported 00:31:17.979 Supported Log Pages Log Page: May Support 00:31:17.979 Commands Supported & Effects Log Page: Not Supported 00:31:17.979 Feature Identifiers & Effects Log Page:May Support 
00:31:17.979 NVMe-MI Commands & Effects Log Page: May Support 00:31:17.979 Data Area 4 for Telemetry Log: Not Supported 00:31:17.979 Error Log Page Entries Supported: 128 00:31:17.979 Keep Alive: Supported 00:31:17.979 Keep Alive Granularity: 10000 ms 00:31:17.979 00:31:17.979 NVM Command Set Attributes 00:31:17.979 ========================== 00:31:17.979 Submission Queue Entry Size 00:31:17.979 Max: 64 00:31:17.979 Min: 64 00:31:17.979 Completion Queue Entry Size 00:31:17.980 Max: 16 00:31:17.980 Min: 16 00:31:17.980 Number of Namespaces: 32 00:31:17.980 Compare Command: Supported 00:31:17.980 Write Uncorrectable Command: Not Supported 00:31:17.980 Dataset Management Command: Supported 00:31:17.980 Write Zeroes Command: Supported 00:31:17.980 Set Features Save Field: Not Supported 00:31:17.980 Reservations: Supported 00:31:17.980 Timestamp: Not Supported 00:31:17.980 Copy: Supported 00:31:17.980 Volatile Write Cache: Present 00:31:17.980 Atomic Write Unit (Normal): 1 00:31:17.980 Atomic Write Unit (PFail): 1 00:31:17.980 Atomic Compare & Write Unit: 1 00:31:17.980 Fused Compare & Write: Supported 00:31:17.980 Scatter-Gather List 00:31:17.980 SGL Command Set: Supported 00:31:17.980 SGL Keyed: Supported 00:31:17.980 SGL Bit Bucket Descriptor: Not Supported 00:31:17.980 SGL Metadata Pointer: Not Supported 00:31:17.980 Oversized SGL: Not Supported 00:31:17.980 SGL Metadata Address: Not Supported 00:31:17.980 SGL Offset: Supported 00:31:17.980 Transport SGL Data Block: Not Supported 00:31:17.980 Replay Protected Memory Block: Not Supported 00:31:17.980 00:31:17.980 Firmware Slot Information 00:31:17.980 ========================= 00:31:17.980 Active slot: 1 00:31:17.980 Slot 1 Firmware Revision: 24.05.1 00:31:17.980 00:31:17.980 00:31:17.980 Commands Supported and Effects 00:31:17.980 ============================== 00:31:17.980 Admin Commands 00:31:17.980 -------------- 00:31:17.980 Get Log Page (02h): Supported 00:31:17.980 Identify (06h): Supported 00:31:17.980 Abort (08h): Supported 00:31:17.980 Set Features (09h): Supported 00:31:17.980 Get Features (0Ah): Supported 00:31:17.980 Asynchronous Event Request (0Ch): Supported 00:31:17.980 Keep Alive (18h): Supported 00:31:17.980 I/O Commands 00:31:17.980 ------------ 00:31:17.980 Flush (00h): Supported LBA-Change 00:31:17.980 Write (01h): Supported LBA-Change 00:31:17.980 Read (02h): Supported 00:31:17.980 Compare (05h): Supported 00:31:17.980 Write Zeroes (08h): Supported LBA-Change 00:31:17.980 Dataset Management (09h): Supported LBA-Change 00:31:17.980 Copy (19h): Supported LBA-Change 00:31:17.980 Unknown (79h): Supported LBA-Change 00:31:17.980 Unknown (7Ah): Supported 00:31:17.980 00:31:17.980 Error Log 00:31:17.980 ========= 00:31:17.980 00:31:17.980 Arbitration 00:31:17.980 =========== 00:31:17.980 Arbitration Burst: 1 00:31:17.980 00:31:17.980 Power Management 00:31:17.980 ================ 00:31:17.980 Number of Power States: 1 00:31:17.980 Current Power State: Power State #0 00:31:17.980 Power State #0: 00:31:17.980 Max Power: 0.00 W 00:31:17.980 Non-Operational State: Operational 00:31:17.980 Entry Latency: Not Reported 00:31:17.980 Exit Latency: Not Reported 00:31:17.980 Relative Read Throughput: 0 00:31:17.980 Relative Read Latency: 0 00:31:17.980 Relative Write Throughput: 0 00:31:17.980 Relative Write Latency: 0 00:31:17.980 Idle Power: Not Reported 00:31:17.980 Active Power: Not Reported 00:31:17.980 Non-Operational Permissive Mode: Not Supported 00:31:17.980 00:31:17.980 Health Information 00:31:17.980 ================== 
00:31:17.980 Critical Warnings: 00:31:17.980 Available Spare Space: OK 00:31:17.980 Temperature: OK 00:31:17.980 Device Reliability: OK 00:31:17.980 Read Only: No 00:31:17.980 Volatile Memory Backup: OK 00:31:17.980 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:17.980 Temperature Threshold: [2024-07-12 01:49:44.310985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.980 [2024-07-12 01:49:44.310990] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc3fb0) 00:31:17.980 [2024-07-12 01:49:44.310997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.980 [2024-07-12 01:49:44.311008] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31cc0, cid 7, qid 0 00:31:17.980 [2024-07-12 01:49:44.311177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.980 [2024-07-12 01:49:44.311184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.980 [2024-07-12 01:49:44.311187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.980 [2024-07-12 01:49:44.311191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31cc0) on tqpair=0xcc3fb0 00:31:17.980 [2024-07-12 01:49:44.311217] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:17.980 [2024-07-12 01:49:44.311228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.980 [2024-07-12 01:49:44.311239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.980 [2024-07-12 01:49:44.311246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.980 [2024-07-12 01:49:44.311252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.980 [2024-07-12 01:49:44.311260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.980 [2024-07-12 01:49:44.311263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.311274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.311295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.311480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.311486] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.311490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311494] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.311500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311507] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.311514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.311527] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.311755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.311762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.311765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.311773] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:17.981 [2024-07-12 01:49:44.311778] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:17.981 [2024-07-12 01:49:44.311787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.311801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.311811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.311977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.311984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.311987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.311991] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.312001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.312015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.312024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.312204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.312210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.312214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.312227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312234] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.312246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.312256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.312472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.312479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.312482] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312486] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.312495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.312509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.312519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.312697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.312704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.312707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312711] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.312720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312724] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312728] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.312734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.312744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.312920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.312926] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.312929] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.312943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.312950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 
[2024-07-12 01:49:44.312957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.312966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.313141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.313148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.313151] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.313164] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313168] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313171] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.313180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.313190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.313365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.313371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.313375] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313379] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.313388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.313402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.313412] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.313585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.313591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.313595] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313599] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.313609] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313613] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.313623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.313632] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.313817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.313824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.313827] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.313840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.313848] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.313854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.313864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.314019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.314025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.314028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.314032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.314042] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.314046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.314049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.314056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.314067] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.318237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 [2024-07-12 01:49:44.318245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.318249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.318252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.318262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.318266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.318269] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc3fb0) 00:31:17.981 [2024-07-12 01:49:44.318276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.981 [2024-07-12 01:49:44.318287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd31740, cid 3, qid 0 00:31:17.981 [2024-07-12 01:49:44.318468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:17.981 
[2024-07-12 01:49:44.318474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:17.981 [2024-07-12 01:49:44.318478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:17.981 [2024-07-12 01:49:44.318482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd31740) on tqpair=0xcc3fb0 00:31:17.981 [2024-07-12 01:49:44.318489] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:31:17.981 0 Kelvin (-273 Celsius) 00:31:17.981 Available Spare: 0% 00:31:17.981 Available Spare Threshold: 0% 00:31:17.981 Life Percentage Used: 0% 00:31:17.981 Data Units Read: 0 00:31:17.981 Data Units Written: 0 00:31:17.981 Host Read Commands: 0 00:31:17.981 Host Write Commands: 0 00:31:17.981 Controller Busy Time: 0 minutes 00:31:17.981 Power Cycles: 0 00:31:17.981 Power On Hours: 0 hours 00:31:17.981 Unsafe Shutdowns: 0 00:31:17.981 Unrecoverable Media Errors: 0 00:31:17.981 Lifetime Error Log Entries: 0 00:31:17.981 Warning Temperature Time: 0 minutes 00:31:17.981 Critical Temperature Time: 0 minutes 00:31:17.981 00:31:17.981 Number of Queues 00:31:17.981 ================ 00:31:17.981 Number of I/O Submission Queues: 127 00:31:17.981 Number of I/O Completion Queues: 127 00:31:17.981 00:31:17.981 Active Namespaces 00:31:17.981 ================= 00:31:17.981 Namespace ID:1 00:31:17.981 Error Recovery Timeout: Unlimited 00:31:17.981 Command Set Identifier: NVM (00h) 00:31:17.981 Deallocate: Supported 00:31:17.981 Deallocated/Unwritten Error: Not Supported 00:31:17.981 Deallocated Read Value: Unknown 00:31:17.981 Deallocate in Write Zeroes: Not Supported 00:31:17.981 Deallocated Guard Field: 0xFFFF 00:31:17.981 Flush: Supported 00:31:17.981 Reservation: Supported 00:31:17.981 Namespace Sharing Capabilities: Multiple Controllers 00:31:17.981 Size (in LBAs): 131072 (0GiB) 00:31:17.981 Capacity (in LBAs): 131072 (0GiB) 00:31:17.981 Utilization (in LBAs): 131072 (0GiB) 00:31:17.981 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:17.981 EUI64: ABCDEF0123456789 00:31:17.981 UUID: a124c774-b459-4d02-add5-dd20f087e980 00:31:17.981 Thin Provisioning: Not Supported 00:31:17.981 Per-NS Atomic Units: Yes 00:31:17.981 Atomic Boundary Size (Normal): 0 00:31:17.981 Atomic Boundary Size (PFail): 0 00:31:17.981 Atomic Boundary Offset: 0 00:31:17.981 Maximum Single Source Range Length: 65535 00:31:17.981 Maximum Copy Length: 65535 00:31:17.981 Maximum Source Range Count: 1 00:31:17.981 NGUID/EUI64 Never Reused: No 00:31:17.981 Namespace Write Protected: No 00:31:17.981 Number of LBA Formats: 1 00:31:17.981 Current LBA Format: LBA Format #00 00:31:17.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:17.981 00:31:17.981 01:49:44 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:18.241 01:49:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:18.241 rmmod nvme_tcp 00:31:18.241 rmmod nvme_fabrics 00:31:18.241 rmmod nvme_keyring 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4153391 ']' 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4153391 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 4153391 ']' 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 4153391 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4153391 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4153391' 00:31:18.241 killing process with pid 4153391 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 4153391 00:31:18.241 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 4153391 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.501 01:49:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.412 01:49:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.412 00:31:20.412 real 0m12.170s 00:31:20.412 user 0m8.567s 00:31:20.412 sys 0m6.548s 00:31:20.412 01:49:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:20.412 01:49:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.413 ************************************ 00:31:20.413 END TEST nvmf_identify 00:31:20.413 ************************************ 00:31:20.413 01:49:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:20.413 01:49:46 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:20.413 01:49:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:20.413 01:49:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:20.413 ************************************ 00:31:20.413 START TEST nvmf_perf 00:31:20.413 ************************************ 00:31:20.413 01:49:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:20.673 * Looking for test storage... 00:31:20.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.673 01:49:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.674 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.674 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.674 01:49:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.674 01:49:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:28.839 01:49:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:28.839 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:28.839 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:28.839 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:28.840 Found net devices under 0000:31:00.0: cvl_0_0 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:28.840 Found net devices under 0000:31:00.1: cvl_0_1 
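The scan above resolves each supported NIC's PCI function to its kernel net device by reading sysfs. A minimal standalone sketch of that lookup (the 0000:31:00.0/0000:31:00.1 addresses and the cvl_* names are the ones reported in this run and will differ on other hosts):

#!/usr/bin/env bash
# List the net devices registered under each PCI network function.
# PCI addresses below are taken from this run; adjust for your host.
for pci in 0000:31:00.0 0000:31:00.1; do
    # Each entry under .../net/ is a kernel interface backed by this function.
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue            # skip functions bound to a userspace driver
        state=$(cat "$dev/operstate" 2>/dev/null || echo unknown)
        echo "Found net device under $pci: ${dev##*/} (operstate: $state)"
    done
done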
00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:28.840 01:49:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:28.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:31:28.840 00:31:28.840 --- 10.0.0.2 ping statistics --- 00:31:28.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.840 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:31:28.840 00:31:28.840 --- 10.0.0.1 ping statistics --- 00:31:28.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.840 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4158415 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4158415 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 4158415 ']' 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:28.840 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:28.840 [2024-07-12 01:49:55.121281] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:28.840 [2024-07-12 01:49:55.121320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.840 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.185 [2024-07-12 01:49:55.183310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.185 [2024-07-12 01:49:55.214876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.185 [2024-07-12 01:49:55.214914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:29.185 [2024-07-12 01:49:55.214922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.185 [2024-07-12 01:49:55.214929] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.185 [2024-07-12 01:49:55.214934] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.185 [2024-07-12 01:49:55.215068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.185 [2024-07-12 01:49:55.215205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.185 [2024-07-12 01:49:55.215360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.185 [2024-07-12 01:49:55.215471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:29.185 01:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:29.765 01:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:29.765 01:49:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:29.765 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:29.765 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:30.024 [2024-07-12 01:49:56.328346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.024 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:30.284 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:30.284 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.544 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:30.544 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:30.544 01:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.805 [2024-07-12 01:49:57.010897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.805 01:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:31.064 01:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:31.064 01:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:31.064 01:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:31.064 01:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:32.446 Initializing NVMe Controllers 00:31:32.446 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:32.446 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:32.446 Initialization complete. Launching workers. 00:31:32.446 ======================================================== 00:31:32.446 Latency(us) 00:31:32.446 Device Information : IOPS MiB/s Average min max 00:31:32.446 PCIE (0000:65:00.0) NSID 1 from core 0: 79605.70 310.96 401.24 13.35 6305.16 00:31:32.446 ======================================================== 00:31:32.446 Total : 79605.70 310.96 401.24 13.35 6305.16 00:31:32.446 00:31:32.446 01:49:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.446 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.826 Initializing NVMe Controllers 00:31:33.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:33.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:33.826 Initialization complete. Launching workers. 
00:31:33.826 ======================================================== 00:31:33.826 Latency(us) 00:31:33.826 Device Information : IOPS MiB/s Average min max 00:31:33.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.00 0.26 15312.36 339.40 46344.26 00:31:33.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17975.03 5987.28 47889.79 00:31:33.826 ======================================================== 00:31:33.826 Total : 123.00 0.48 16524.63 339.40 47889.79 00:31:33.826 00:31:33.826 01:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.826 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.765 Initializing NVMe Controllers 00:31:34.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:34.765 Initialization complete. Launching workers. 00:31:34.765 ======================================================== 00:31:34.765 Latency(us) 00:31:34.765 Device Information : IOPS MiB/s Average min max 00:31:34.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10665.43 41.66 3015.71 426.85 43232.12 00:31:34.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3713.97 14.51 8652.90 6957.84 19125.50 00:31:34.765 ======================================================== 00:31:34.765 Total : 14379.40 56.17 4471.71 426.85 43232.12 00:31:34.765 00:31:34.765 01:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:34.765 01:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:34.765 01:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:35.025 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.563 Initializing NVMe Controllers 00:31:37.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.563 Controller IO queue size 128, less than required. 00:31:37.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.563 Controller IO queue size 128, less than required. 00:31:37.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:37.563 Initialization complete. Launching workers. 
00:31:37.563 ======================================================== 00:31:37.563 Latency(us) 00:31:37.563 Device Information : IOPS MiB/s Average min max 00:31:37.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1242.02 310.51 105749.15 67262.60 166354.84 00:31:37.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.28 150.57 222936.71 86311.75 329249.95 00:31:37.563 ======================================================== 00:31:37.563 Total : 1844.31 461.08 144018.37 67262.60 329249.95 00:31:37.563 00:31:37.563 01:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:37.563 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.823 No valid NVMe controllers or AIO or URING devices found 00:31:37.823 Initializing NVMe Controllers 00:31:37.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.823 Controller IO queue size 128, less than required. 00:31:37.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.823 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:37.823 Controller IO queue size 128, less than required. 00:31:37.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.823 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:37.823 WARNING: Some requested NVMe devices were skipped 00:31:37.823 01:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:37.823 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.360 Initializing NVMe Controllers 00:31:40.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.360 Controller IO queue size 128, less than required. 00:31:40.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:40.360 Controller IO queue size 128, less than required. 00:31:40.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:40.360 Initialization complete. Launching workers. 
00:31:40.360 00:31:40.360 ==================== 00:31:40.360 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:40.360 TCP transport: 00:31:40.360 polls: 19627 00:31:40.360 idle_polls: 10235 00:31:40.360 sock_completions: 9392 00:31:40.360 nvme_completions: 9451 00:31:40.360 submitted_requests: 14140 00:31:40.360 queued_requests: 1 00:31:40.360 00:31:40.360 ==================== 00:31:40.360 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:40.360 TCP transport: 00:31:40.360 polls: 21806 00:31:40.360 idle_polls: 10962 00:31:40.360 sock_completions: 10844 00:31:40.360 nvme_completions: 5039 00:31:40.360 submitted_requests: 7540 00:31:40.360 queued_requests: 1 00:31:40.360 ======================================================== 00:31:40.360 Latency(us) 00:31:40.360 Device Information : IOPS MiB/s Average min max 00:31:40.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2362.38 590.60 54537.00 30125.79 89279.81 00:31:40.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1259.44 314.86 103160.11 47312.99 137527.35 00:31:40.360 ======================================================== 00:31:40.360 Total : 3621.82 905.46 71445.01 30125.79 137527.35 00:31:40.360 00:31:40.360 01:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:40.360 01:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.619 01:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:40.619 01:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:40.619 01:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c7a91962-784d-42d9-ab81-fa84356057eb 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c7a91962-784d-42d9-ab81-fa84356057eb 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=c7a91962-784d-42d9-ab81-fa84356057eb 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:31:41.559 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:41.819 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:41.819 { 00:31:41.819 "uuid": "c7a91962-784d-42d9-ab81-fa84356057eb", 00:31:41.819 "name": "lvs_0", 00:31:41.819 "base_bdev": "Nvme0n1", 00:31:41.819 "total_data_clusters": 457407, 00:31:41.819 "free_clusters": 457407, 00:31:41.819 "block_size": 512, 00:31:41.819 "cluster_size": 4194304 00:31:41.819 } 00:31:41.819 ]' 00:31:41.819 01:50:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="c7a91962-784d-42d9-ab81-fa84356057eb") .free_clusters' 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=457407 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c7a91962-784d-42d9-ab81-fa84356057eb") .cluster_size' 00:31:41.819 01:50:08 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=1829628 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 1829628 00:31:41.819 1829628 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:41.819 01:50:08 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7a91962-784d-42d9-ab81-fa84356057eb lbd_0 20480 00:31:42.079 01:50:08 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3652d1e4-19f6-48a4-9ea9-0bfabcdde862 00:31:42.079 01:50:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3652d1e4-19f6-48a4-9ea9-0bfabcdde862 lvs_n_0 00:31:43.988 01:50:09 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d0ab81f7-3469-4180-9d98-4a357762ad75 00:31:43.988 01:50:09 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d0ab81f7-3469-4180-9d98-4a357762ad75 00:31:43.988 01:50:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=d0ab81f7-3469-4180-9d98-4a357762ad75 00:31:43.988 01:50:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:43.989 01:50:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:31:43.989 01:50:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:31:43.989 01:50:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:43.989 { 00:31:43.989 "uuid": "c7a91962-784d-42d9-ab81-fa84356057eb", 00:31:43.989 "name": "lvs_0", 00:31:43.989 "base_bdev": "Nvme0n1", 00:31:43.989 "total_data_clusters": 457407, 00:31:43.989 "free_clusters": 452287, 00:31:43.989 "block_size": 512, 00:31:43.989 "cluster_size": 4194304 00:31:43.989 }, 00:31:43.989 { 00:31:43.989 "uuid": "d0ab81f7-3469-4180-9d98-4a357762ad75", 00:31:43.989 "name": "lvs_n_0", 00:31:43.989 "base_bdev": "3652d1e4-19f6-48a4-9ea9-0bfabcdde862", 00:31:43.989 "total_data_clusters": 5114, 00:31:43.989 "free_clusters": 5114, 00:31:43.989 "block_size": 512, 00:31:43.989 "cluster_size": 4194304 00:31:43.989 } 00:31:43.989 ]' 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d0ab81f7-3469-4180-9d98-4a357762ad75") .free_clusters' 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d0ab81f7-3469-4180-9d98-4a357762ad75") .cluster_size' 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:31:43.989 20456 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0ab81f7-3469-4180-9d98-4a357762ad75 lbd_nest_0 20456 00:31:43.989 01:50:10 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=62e3f9af-9d86-4a7e-b06c-93f4a445687c 00:31:43.989 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:44.249 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:44.249 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 62e3f9af-9d86-4a7e-b06c-93f4a445687c 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:44.508 01:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:44.508 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.728 Initializing NVMe Controllers 00:31:56.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.728 Initialization complete. Launching workers. 00:31:56.728 ======================================================== 00:31:56.728 Latency(us) 00:31:56.728 Device Information : IOPS MiB/s Average min max 00:31:56.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.00 0.02 21328.38 250.72 49193.46 00:31:56.728 ======================================================== 00:31:56.728 Total : 47.00 0.02 21328.38 250.72 49193.46 00:31:56.728 00:31:56.728 01:50:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:56.728 01:50:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.728 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.723 Initializing NVMe Controllers 00:32:06.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:06.723 Initialization complete. Launching workers. 
00:32:06.723 ======================================================== 00:32:06.723 Latency(us) 00:32:06.723 Device Information : IOPS MiB/s Average min max 00:32:06.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 59.29 7.41 16880.75 6009.32 55867.05 00:32:06.723 ======================================================== 00:32:06.723 Total : 59.29 7.41 16880.75 6009.32 55867.05 00:32:06.723 00:32:06.723 01:50:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:06.723 01:50:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:06.723 01:50:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:06.723 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.716 Initializing NVMe Controllers 00:32:16.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:16.717 Initialization complete. Launching workers. 00:32:16.717 ======================================================== 00:32:16.717 Latency(us) 00:32:16.717 Device Information : IOPS MiB/s Average min max 00:32:16.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9191.92 4.49 3480.76 303.98 8682.55 00:32:16.717 ======================================================== 00:32:16.717 Total : 9191.92 4.49 3480.76 303.98 8682.55 00:32:16.717 00:32:16.717 01:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:16.717 01:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:16.717 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.706 Initializing NVMe Controllers 00:32:26.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:26.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:26.706 Initialization complete. Launching workers. 00:32:26.706 ======================================================== 00:32:26.706 Latency(us) 00:32:26.706 Device Information : IOPS MiB/s Average min max 00:32:26.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2814.41 351.80 11377.71 903.27 27188.07 00:32:26.706 ======================================================== 00:32:26.706 Total : 2814.41 351.80 11377.71 903.27 27188.07 00:32:26.706 00:32:26.706 01:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:26.706 01:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:26.706 01:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:26.706 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.700 Initializing NVMe Controllers 00:32:36.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:36.700 Controller IO queue size 128, less than required. 00:32:36.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:36.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:36.700 Initialization complete. Launching workers. 00:32:36.700 ======================================================== 00:32:36.700 Latency(us) 00:32:36.700 Device Information : IOPS MiB/s Average min max 00:32:36.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15929.33 7.78 8040.12 2104.86 16907.93 00:32:36.700 ======================================================== 00:32:36.700 Total : 15929.33 7.78 8040.12 2104.86 16907.93 00:32:36.700 00:32:36.700 01:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:36.700 01:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:36.700 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.778 Initializing NVMe Controllers 00:32:46.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.778 Controller IO queue size 128, less than required. 00:32:46.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:46.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:46.778 Initialization complete. Launching workers. 00:32:46.778 ======================================================== 00:32:46.778 Latency(us) 00:32:46.778 Device Information : IOPS MiB/s Average min max 00:32:46.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1182.91 147.86 109152.29 14980.32 250752.68 00:32:46.778 ======================================================== 00:32:46.778 Total : 1182.91 147.86 109152.29 14980.32 250752.68 00:32:46.778 00:32:46.778 01:51:13 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.038 01:51:13 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62e3f9af-9d86-4a7e-b06c-93f4a445687c 00:32:48.950 01:51:14 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:48.950 01:51:14 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3652d1e4-19f6-48a4-9ea9-0bfabcdde862 00:32:48.950 01:51:15 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:49.210 rmmod nvme_tcp 00:32:49.210 rmmod nvme_fabrics 00:32:49.210 rmmod nvme_keyring 00:32:49.210 01:51:15 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4158415 ']' 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4158415 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 4158415 ']' 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 4158415 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4158415 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4158415' 00:32:49.210 killing process with pid 4158415 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 4158415 00:32:49.210 01:51:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 4158415 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.124 01:51:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.670 01:51:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.670 00:32:53.670 real 1m32.723s 00:32:53.670 user 5m24.087s 00:32:53.670 sys 0m15.412s 00:32:53.670 01:51:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:53.670 01:51:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:53.670 ************************************ 00:32:53.670 END TEST nvmf_perf 00:32:53.670 ************************************ 00:32:53.670 01:51:19 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:53.670 01:51:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:53.670 01:51:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:53.670 01:51:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:53.670 ************************************ 00:32:53.670 START TEST nvmf_fio_host 00:32:53.670 ************************************ 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:53.670 * Looking for test storage... 
00:32:53.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.670 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:53.671 01:51:19 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:01.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:01.818 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:01.818 Found net devices under 0000:31:00.0: cvl_0_0 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:01.818 Found net devices under 0000:31:00.1: cvl_0_1 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
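The nvmf_tcp_init sequence that follows (and that was also run for the perf test above) splits the two ports of the NIC into an initiator side and a namespaced target side. A condensed sketch of that topology setup, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing seen in this run:

#!/usr/bin/env bash
# Build the NVMe/TCP loopback topology used by these tests: one port stays in
# the default namespace as the initiator, its sibling port moves into a
# private namespace and serves as the target. Names/addresses match this run.
set -e
TGT_IF=cvl_0_0          # target-side port
INI_IF=cvl_0_1          # initiator-side port
NS=cvl_0_0_ns_spdk      # namespace hosting the SPDK target

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2      # initiator -> target reachability check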
00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.818 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:01.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:33:01.819 00:33:01.819 --- 10.0.0.2 ping statistics --- 00:33:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.819 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:33:01.819 00:33:01.819 --- 10.0.0.1 ping statistics --- 00:33:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.819 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4179077 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4179077 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 4179077 ']' 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:01.819 01:51:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.819 [2024-07-12 01:51:27.900041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:01.819 [2024-07-12 01:51:27.900112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.819 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.819 [2024-07-12 01:51:27.979527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.819 [2024-07-12 01:51:28.019322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:01.819 [2024-07-12 01:51:28.019365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.819 [2024-07-12 01:51:28.019374] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.819 [2024-07-12 01:51:28.019380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.819 [2024-07-12 01:51:28.019386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.819 [2024-07-12 01:51:28.019525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.819 [2024-07-12 01:51:28.019639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.819 [2024-07-12 01:51:28.019795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.819 [2024-07-12 01:51:28.019796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.392 01:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:02.392 01:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:33:02.392 01:51:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:02.654 [2024-07-12 01:51:28.825370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.654 01:51:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:02.654 01:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.654 01:51:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.654 01:51:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:02.917 Malloc1 00:33:02.917 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.917 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:03.178 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.441 [2024-07-12 01:51:29.539000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:03.441 01:51:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:04.019 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:04.019 fio-3.35 00:33:04.019 Starting 1 thread 00:33:04.019 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.565 00:33:06.565 test: (groupid=0, jobs=1): err= 0: pid=4179795: Fri Jul 12 01:51:32 2024 00:33:06.565 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:33:06.565 slat (usec): min=2, max=302, avg= 2.17, stdev= 2.46 00:33:06.565 clat (usec): min=3779, max=8876, avg=5120.57, stdev=548.28 00:33:06.565 lat (usec): min=3814, max=8878, avg=5122.73, stdev=548.36 00:33:06.565 clat percentiles (usec): 00:33:06.565 | 1.00th=[ 4293], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:33:06.565 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:33:06.565 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5800], 00:33:06.565 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[ 8291], 99.95th=[ 8455], 00:33:06.565 | 99.99th=[ 8717] 00:33:06.565 bw ( KiB/s): min=50600, 
max=56512, per=99.95%, avg=54954.00, stdev=2903.91, samples=4 00:33:06.565 iops : min=12650, max=14128, avg=13738.50, stdev=725.98, samples=4 00:33:06.565 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec); 0 zone resets 00:33:06.565 slat (usec): min=2, max=269, avg= 2.26, stdev= 1.78 00:33:06.565 clat (usec): min=2896, max=7724, avg=4136.68, stdev=457.48 00:33:06.565 lat (usec): min=2914, max=7726, avg=4138.95, stdev=457.59 00:33:06.565 clat percentiles (usec): 00:33:06.565 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:33:06.565 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:33:06.565 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:33:06.566 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6718], 99.95th=[ 7046], 00:33:06.566 | 99.99th=[ 7570] 00:33:06.566 bw ( KiB/s): min=51032, max=56344, per=99.99%, avg=54886.00, stdev=2575.09, samples=4 00:33:06.566 iops : min=12758, max=14086, avg=13721.50, stdev=643.77, samples=4 00:33:06.566 lat (msec) : 4=19.66%, 10=80.34% 00:33:06.566 cpu : usr=73.89%, sys=23.27%, ctx=55, majf=0, minf=15 00:33:06.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:06.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:06.566 issued rwts: total=27546,27501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:06.566 00:33:06.566 Run status group 0 (all jobs): 00:33:06.566 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:33:06.566 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print 
$3}' 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:06.566 01:51:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:06.566 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:06.566 fio-3.35 00:33:06.566 Starting 1 thread 00:33:06.566 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.116 00:33:09.116 test: (groupid=0, jobs=1): err= 0: pid=4180432: Fri Jul 12 01:51:35 2024 00:33:09.116 read: IOPS=9134, BW=143MiB/s (150MB/s)(286MiB/2003msec) 00:33:09.116 slat (usec): min=3, max=107, avg= 3.66, stdev= 1.58 00:33:09.116 clat (usec): min=1376, max=16100, avg=8551.92, stdev=2117.36 00:33:09.116 lat (usec): min=1380, max=16104, avg=8555.58, stdev=2117.51 00:33:09.116 clat percentiles (usec): 00:33:09.116 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6718], 00:33:09.116 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:33:09.116 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11338], 95.00th=[11863], 00:33:09.116 | 99.00th=[13960], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:33:09.116 | 99.99th=[16057] 00:33:09.116 bw ( KiB/s): min=61312, max=87392, per=49.15%, avg=71832.00, stdev=11069.21, samples=4 00:33:09.116 iops : min= 3832, max= 5462, avg=4489.50, stdev=691.83, samples=4 00:33:09.116 write: IOPS=5348, BW=83.6MiB/s (87.6MB/s)(147MiB/1759msec); 0 zone resets 00:33:09.116 slat (usec): min=40, max=322, avg=41.09, stdev= 7.21 00:33:09.116 clat (usec): min=2649, max=17739, avg=9441.73, stdev=1636.17 00:33:09.116 lat (usec): min=2692, max=17779, avg=9482.83, stdev=1637.27 00:33:09.116 clat percentiles (usec): 00:33:09.116 | 1.00th=[ 6456], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8094], 00:33:09.116 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:09.116 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11469], 95.00th=[12387], 00:33:09.116 | 99.00th=[14353], 99.50th=[14877], 99.90th=[17171], 99.95th=[17433], 00:33:09.116 | 99.99th=[17695] 00:33:09.116 bw ( KiB/s): min=63456, max=91104, per=87.26%, avg=74672.00, stdev=11786.79, samples=4 00:33:09.116 iops : min= 3966, max= 5694, avg=4667.00, stdev=736.67, samples=4 00:33:09.116 lat (msec) : 2=0.04%, 4=0.37%, 10=72.00%, 20=27.58% 00:33:09.116 cpu : usr=83.07%, sys=14.59%, ctx=13, majf=0, minf=28 00:33:09.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:09.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:09.116 issued rwts: total=18297,9408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:09.116 00:33:09.116 Run status group 0 (all jobs): 00:33:09.116 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=286MiB (300MB), run=2003-2003msec 00:33:09.116 WRITE: bw=83.6MiB/s (87.6MB/s), 83.6MiB/s-83.6MiB/s (87.6MB/s-87.6MB/s), io=147MiB (154MB), run=1759-1759msec 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:33:09.116 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:09.688 Nvme0n1 00:33:09.688 01:51:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=283d8709-83db-419f-abc0-57d5c94cf4e1 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 283d8709-83db-419f-abc0-57d5c94cf4e1 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=283d8709-83db-419f-abc0-57d5c94cf4e1 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:10.259 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:33:10.259 { 00:33:10.259 "uuid": "283d8709-83db-419f-abc0-57d5c94cf4e1", 00:33:10.259 "name": "lvs_0", 00:33:10.259 "base_bdev": "Nvme0n1", 00:33:10.260 "total_data_clusters": 1787, 00:33:10.260 "free_clusters": 1787, 00:33:10.260 "block_size": 512, 00:33:10.260 "cluster_size": 1073741824 
00:33:10.260 } 00:33:10.260 ]' 00:33:10.260 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="283d8709-83db-419f-abc0-57d5c94cf4e1") .free_clusters' 00:33:10.260 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1787 00:33:10.260 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="283d8709-83db-419f-abc0-57d5c94cf4e1") .cluster_size' 00:33:10.520 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:33:10.520 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1829888 00:33:10.520 01:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1829888 00:33:10.520 1829888 00:33:10.520 01:51:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:10.520 ad441343-d9be-42c0-9f08-34b2325647a3 00:33:10.520 01:51:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:10.781 01:51:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:11.043 01:51:37 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:11.043 01:51:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:11.624 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:11.624 fio-3.35 00:33:11.624 Starting 1 thread 00:33:11.624 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.170 00:33:14.170 test: (groupid=0, jobs=1): err= 0: pid=4181628: Fri Jul 12 01:51:40 2024 00:33:14.170 read: IOPS=10.2k, BW=39.7MiB/s (41.7MB/s)(79.7MiB/2005msec) 00:33:14.170 slat (usec): min=2, max=112, avg= 2.24, stdev= 1.10 00:33:14.170 clat (usec): min=2502, max=11905, avg=6939.70, stdev=526.60 00:33:14.170 lat (usec): min=2518, max=11907, avg=6941.93, stdev=526.55 00:33:14.170 clat percentiles (usec): 00:33:14.170 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:33:14.170 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:33:14.170 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:33:14.170 | 99.00th=[ 8094], 99.50th=[ 8291], 99.90th=[ 9765], 99.95th=[11207], 00:33:14.170 | 99.99th=[11863] 00:33:14.170 bw ( KiB/s): min=39656, max=41224, per=99.88%, avg=40638.00, stdev=696.85, samples=4 00:33:14.170 iops : min= 9914, max=10306, avg=10159.50, stdev=174.21, samples=4 00:33:14.170 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(79.8MiB/2005msec); 0 zone resets 00:33:14.170 slat (nsec): min=2133, max=99290, avg=2331.89, stdev=762.61 00:33:14.170 clat (usec): min=1153, max=9756, avg=5555.04, stdev=445.25 00:33:14.170 lat (usec): min=1160, max=9759, avg=5557.37, stdev=445.22 00:33:14.170 clat percentiles (usec): 00:33:14.170 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 00:33:14.170 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:33:14.170 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6259], 00:33:14.170 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 8094], 99.95th=[ 8979], 00:33:14.170 | 99.99th=[ 9634] 00:33:14.170 bw ( KiB/s): min=40208, max=41088, per=100.00%, avg=40740.00, stdev=388.94, samples=4 00:33:14.170 iops : min=10052, max=10272, avg=10185.00, stdev=97.24, samples=4 00:33:14.170 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.05% 00:33:14.170 cpu : usr=70.41%, sys=27.40%, ctx=59, majf=0, minf=15 00:33:14.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:14.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:33:14.170 issued rwts: total=20395,20421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:14.170 00:33:14.170 Run status group 0 (all jobs): 00:33:14.170 READ: bw=39.7MiB/s (41.7MB/s), 39.7MiB/s-39.7MiB/s (41.7MB/s-41.7MB/s), io=79.7MiB (83.5MB), run=2005-2005msec 00:33:14.170 WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=79.8MiB (83.6MB), run=2005-2005msec 00:33:14.170 01:51:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:14.170 01:51:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:14.741 01:51:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4abe48f0-0423-4a42-b326-31a95d173439 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 4abe48f0-0423-4a42-b326-31a95d173439 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=4abe48f0-0423-4a42-b326-31a95d173439 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:33:15.003 { 00:33:15.003 "uuid": "283d8709-83db-419f-abc0-57d5c94cf4e1", 00:33:15.003 "name": "lvs_0", 00:33:15.003 "base_bdev": "Nvme0n1", 00:33:15.003 "total_data_clusters": 1787, 00:33:15.003 "free_clusters": 0, 00:33:15.003 "block_size": 512, 00:33:15.003 "cluster_size": 1073741824 00:33:15.003 }, 00:33:15.003 { 00:33:15.003 "uuid": "4abe48f0-0423-4a42-b326-31a95d173439", 00:33:15.003 "name": "lvs_n_0", 00:33:15.003 "base_bdev": "ad441343-d9be-42c0-9f08-34b2325647a3", 00:33:15.003 "total_data_clusters": 457025, 00:33:15.003 "free_clusters": 457025, 00:33:15.003 "block_size": 512, 00:33:15.003 "cluster_size": 4194304 00:33:15.003 } 00:33:15.003 ]' 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="4abe48f0-0423-4a42-b326-31a95d173439") .free_clusters' 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=457025 00:33:15.003 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4abe48f0-0423-4a42-b326-31a95d173439") .cluster_size' 00:33:15.264 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:33:15.264 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1828100 00:33:15.264 01:51:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1828100 00:33:15.264 1828100 00:33:15.264 01:51:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:16.208 d51c3922-9afc-4896-87e8-105f0d94801c 00:33:16.208 01:51:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:16.467 01:51:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:16.467 01:51:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:16.727 01:51:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.987 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:16.987 fio-3.35 00:33:16.987 Starting 1 thread 00:33:16.987 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.529 00:33:19.529 test: (groupid=0, jobs=1): err= 0: pid=4182805: Fri Jul 12 01:51:45 2024 00:33:19.529 read: IOPS=9322, BW=36.4MiB/s (38.2MB/s)(73.0MiB/2006msec) 00:33:19.529 slat (usec): min=2, max=112, avg= 2.23, stdev= 1.13 00:33:19.529 clat (usec): min=2093, max=12643, avg=7582.84, stdev=585.67 00:33:19.529 lat (usec): min=2111, max=12645, avg=7585.07, stdev=585.61 00:33:19.529 clat percentiles (usec): 00:33:19.529 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:33:19.529 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:33:19.529 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:33:19.529 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11731], 00:33:19.529 | 99.99th=[12518] 00:33:19.529 bw ( KiB/s): min=36064, max=37928, per=99.92%, avg=37260.00, stdev=819.93, samples=4 00:33:19.529 iops : min= 9016, max= 9482, avg=9315.00, stdev=204.98, samples=4 00:33:19.529 write: IOPS=9325, BW=36.4MiB/s (38.2MB/s)(73.1MiB/2006msec); 0 zone resets 00:33:19.529 slat (nsec): min=2146, max=108525, avg=2321.93, stdev=832.10 00:33:19.529 clat (usec): min=1366, max=11208, avg=6047.15, stdev=505.94 00:33:19.529 lat (usec): min=1374, max=11210, avg=6049.47, stdev=505.92 00:33:19.529 clat percentiles (usec): 00:33:19.529 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:33:19.529 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:33:19.529 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:33:19.529 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 9372], 99.95th=[10421], 00:33:19.529 | 99.99th=[11207] 00:33:19.529 bw ( KiB/s): min=36944, max=37632, per=99.99%, avg=37300.00, stdev=321.96, samples=4 00:33:19.529 iops : min= 9236, max= 9408, avg=9325.00, stdev=80.49, samples=4 00:33:19.529 lat (msec) : 2=0.01%, 4=0.11%, 10=99.78%, 20=0.11% 00:33:19.529 cpu : usr=70.17%, sys=27.73%, ctx=53, majf=0, minf=15 00:33:19.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:19.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:19.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:19.529 issued rwts: total=18700,18707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:19.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:19.529 00:33:19.529 Run status group 0 (all jobs): 00:33:19.529 READ: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=73.0MiB (76.6MB), run=2006-2006msec 00:33:19.529 WRITE: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=73.1MiB (76.6MB), run=2006-2006msec 00:33:19.529 01:51:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:19.790 01:51:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:19.790 01:51:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:21.700 01:51:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:21.961 01:51:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:22.531 01:51:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:22.531 01:51:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:25.067 01:51:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:25.067 01:51:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:25.068 rmmod nvme_tcp 00:33:25.068 rmmod nvme_fabrics 00:33:25.068 rmmod nvme_keyring 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4179077 ']' 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4179077 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 4179077 ']' 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 4179077 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4179077 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4179077' 00:33:25.068 killing process with pid 4179077 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 4179077 00:33:25.068 01:51:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 4179077 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
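The cleanup traced here is nvmftestfini followed by nvmf_tcp_fini: unload the nvme-tcp/nvme-fabrics modules, kill the nvmf_tgt started earlier (pid 4179077), flush the test addresses, and remove the cvl_0_0_ns_spdk namespace created during nvmf_tcp_init. A condensed sketch of that teardown for this run's topology; the final `ip netns delete` step is an assumption about what the _remove_spdk_ns helper does, since its xtrace is suppressed in the log:

#!/usr/bin/env bash
# Sketch: teardown mirroring nvmftestfini/nvmf_tcp_fini for this run's topology.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
nvmfpid=4179077

modprobe -v -r nvme-tcp                                   # also drops nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done  # wait for nvmf_tgt to exit
ip netns exec "$NVMF_TARGET_NAMESPACE" ip -4 addr flush cvl_0_0 2>/dev/null
ip -4 addr flush cvl_0_1
ip netns delete "$NVMF_TARGET_NAMESPACE" 2>/dev/null      # assumption: namespace removal done by _remove_spdk_ns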
00:33:25.068 01:51:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.075 01:51:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.075 00:33:27.075 real 0m33.611s 00:33:27.075 user 2m34.793s 00:33:27.075 sys 0m10.456s 00:33:27.075 01:51:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:27.075 01:51:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.075 ************************************ 00:33:27.075 END TEST nvmf_fio_host 00:33:27.076 ************************************ 00:33:27.076 01:51:53 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:27.076 01:51:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:27.076 01:51:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:27.076 01:51:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.076 ************************************ 00:33:27.076 START TEST nvmf_failover 00:33:27.076 ************************************ 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:27.076 * Looking for test storage... 00:33:27.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:27.076 01:51:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.213 01:52:01 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:35.213 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:35.213 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.213 01:52:01 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:35.213 Found net devices under 0000:31:00.0: cvl_0_0 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:35.213 Found net devices under 0000:31:00.1: cvl_0_1 00:33:35.213 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:35.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:33:35.214 00:33:35.214 --- 10.0.0.2 ping statistics --- 00:33:35.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.214 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:33:35.214 00:33:35.214 --- 10.0.0.1 ping statistics --- 00:33:35.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.214 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4188819 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4188819 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4188819 ']' 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
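[editor's note] The nvmftestinit/nvmf_tcp_init steps traced above move one port of the E810 pair (cvl_0_0) into a private network namespace (cvl_0_0_ns_spdk) with address 10.0.0.2, while the other port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator side; the iptables rule and the two pings then confirm the loopback TCP path before the nvmf target is launched inside the namespace. A minimal stand-alone sketch of the same topology is shown below; it is an assumption-laden reconstruction that uses a veth pair and hypothetical interface names instead of the physical cvl_* ports used in this run.
# Hedged sketch: namespace-based NVMe/TCP loopback topology, veth pair assumed.
ip netns add nvmf_tgt_ns                                   # target-side namespace
ip link add veth_init type veth peer name veth_tgt         # hypothetical interface names
ip link set veth_tgt netns nvmf_tgt_ns                     # target end lives in the namespace
ip addr add 10.0.0.1/24 dev veth_init                      # initiator address, default namespace
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec nvmf_tgt_ns ip link set veth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT   # mirrors the firewall rule in the log
ping -c 1 10.0.0.2                                         # initiator -> target reachability check
[end editor's note]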
00:33:35.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.214 01:52:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:35.214 [2024-07-12 01:52:01.537795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:35.214 [2024-07-12 01:52:01.537861] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.474 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.474 [2024-07-12 01:52:01.632977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:35.474 [2024-07-12 01:52:01.684611] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.474 [2024-07-12 01:52:01.684670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.474 [2024-07-12 01:52:01.684678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.474 [2024-07-12 01:52:01.684685] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.474 [2024-07-12 01:52:01.684691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.474 [2024-07-12 01:52:01.684830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:35.474 [2024-07-12 01:52:01.684996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.474 [2024-07-12 01:52:01.684997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.045 01:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:36.304 [2024-07-12 01:52:02.482454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.304 01:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:36.566 Malloc0 00:33:36.566 01:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:36.566 01:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:36.826 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:33:36.826 [2024-07-12 01:52:03.181658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.085 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:37.085 [2024-07-12 01:52:03.342103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:37.085 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:37.344 [2024-07-12 01:52:03.502614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:37.344 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4189183 00:33:37.344 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:37.344 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:37.344 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4189183 /var/tmp/bdevperf.sock 00:33:37.344 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4189183 ']' 00:33:37.345 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:37.345 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:37.345 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:37.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
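[editor's note] The rpc.py calls traced above build the target configuration for the failover test and then start an idle bdevperf instance that is driven over its own RPC socket. The condensed restatement below is taken directly from the commands in this log; only the long Jenkins workspace prefixes are shortened here (the $RPC shorthand and relative bdevperf path are introduced for readability and are not in the original trace).
# Condensed restatement of the target/bdevperf setup traced above.
RPC=./scripts/rpc.py                                # stands in for the full workspace path
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                      # three portals on the same target address
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf starts idle (-z) on its own RPC socket; -t 15 gives the 15-second verify run
# whose output appears later in try.txt.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
[end editor's note]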
00:33:37.345 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:37.345 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:37.604 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.604 01:52:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:37.604 01:52:03 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:37.864 NVMe0n1 00:33:37.864 01:52:04 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:38.125 00:33:38.125 01:52:04 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4189448 00:33:38.125 01:52:04 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:38.125 01:52:04 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:39.063 01:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.323 [2024-07-12 01:52:05.476653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476754] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.323 [2024-07-12 01:52:05.476780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the 
state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 [2024-07-12 01:52:05.476918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1333e30 is same with the state(5) to be set 00:33:39.324 01:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:42.621 01:52:08 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.621 00:33:42.621 01:52:08 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:42.881 [2024-07-12 01:52:09.076402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 
00:33:42.881 [2024-07-12 01:52:09.076445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.881 [2024-07-12 01:52:09.076499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 [2024-07-12 01:52:09.076645] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13342f0 is same with the state(5) to be set 00:33:42.882 01:52:09 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:46.179 01:52:12 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.179 [2024-07-12 01:52:12.252720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.179 01:52:12 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:47.119 01:52:13 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:47.119 [2024-07-12 01:52:13.425041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.119 [2024-07-12 01:52:13.425113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425155] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 [2024-07-12 01:52:13.425181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da0e0 is same with the state(5) to be set 00:33:47.120 01:52:13 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4189448 00:33:53.713 0 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4189183 ']' 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4189183' 00:33:53.713 killing process with pid 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4189183 00:33:53.713 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:53.713 [2024-07-12 01:52:03.572465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:53.713 [2024-07-12 01:52:03.572519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4189183 ] 00:33:53.713 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.713 [2024-07-12 01:52:03.638543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.713 [2024-07-12 01:52:03.669252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.713 Running I/O for 15 seconds... 
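[editor's note] The failover exercise itself is what produces the bursts of "recv state of tqpair ... is same with the state(5)" errors above and the ABORTED / SQ DELETION completions dumped from try.txt below: the bdev_nvme controller is attached through two portals, I/O is started, and host/failover.sh then removes and re-adds listeners so the active path keeps moving while the verify workload runs. The sequence below is condensed from the failover.sh steps traced in this log; the $TGT_RPC/$BDEV_RPC/$NQN shorthands and shortened paths are introduced here and are not part of the original trace.
# Condensed failover flow from host/failover.sh as traced above (paths shortened).
BDEV_RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf-side RPC
TGT_RPC="./scripts/rpc.py"                              # nvmf target RPC (inside the netns)
NQN=nqn.2016-06.io.spdk:cnode1
$BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN   # second path
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &              # 15 s verify run
sleep 1
$TGT_RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420    # I/O fails over to 4421
sleep 3
$BDEV_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN   # third path
$TGT_RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421    # fail over to 4422
sleep 3
$TGT_RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420       # restore the first portal
sleep 1
$TGT_RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420
wait            # let the verify run finish; bdevperf is then killed and try.txt is dumped
[end editor's note]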
00:33:53.713 [2024-07-12 01:52:05.479272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98656 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.713 [2024-07-12 01:52:05.479830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.713 [2024-07-12 01:52:05.479839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 
[2024-07-12 01:52:05.479966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.479991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.479998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.714 [2024-07-12 01:52:05.480121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.714 [2024-07-12 01:52:05.480128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.714 [2024-07-12 01:52:05.480137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:53.714 [2024-07-12 01:52:05.480144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command print / ABORTED - SQ DELETION (00/08) completion pair repeats for the remaining outstanding WRITEs on qid:1, lba:98832 through lba:98920, timestamps 01:52:05.480153 through 01:52:05.480338 ...]
00:33:53.714 [2024-07-12 01:52:05.480357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:53.714 [2024-07-12 01:52:05.480364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0
00:33:53.714 [2024-07-12 01:52:05.480372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.714 [2024-07-12 01:52:05.480407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:53.714 [2024-07-12 01:52:05.480416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeats for admin commands cid:1 through cid:3 on qid:0, timestamps 01:52:05.480425 through 01:52:05.480462 ...]
00:33:53.714 [2024-07-12 01:52:05.480469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17260 is same with the state(5) to be set
00:33:53.714 [2024-07-12 01:52:05.480621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:53.714 [2024-07-12 01:52:05.480629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:53.714 [2024-07-12 01:52:05.480635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0
00:33:53.714 [2024-07-12 01:52:05.480643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... this four-entry sequence (aborting queued i/o, Command completed manually, READ/WRITE command print, ABORTED - SQ DELETION (00/08) completion) repeats for every queued request on qid:1: WRITE lba:98944 through lba:99152 and READ lba:98136 through lba:98408, followed by WRITE lba:98416 through lba:98784, timestamps 01:52:05.480651 through 01:52:05.500535 ...]
00:33:53.719 [2024-07-12 01:52:05.500541] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98792 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98800 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98808 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98816 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98824 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98832 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98840 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.719 [2024-07-12 01:52:05.500774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.719 [2024-07-12 01:52:05.500780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:33:53.719 [2024-07-12 01:52:05.500787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.719 [2024-07-12 01:52:05.500794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98880 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 
01:52:05.500856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98888 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98896 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98904 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.500973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.720 [2024-07-12 01:52:05.500978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.720 [2024-07-12 01:52:05.500984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0 00:33:53.720 [2024-07-12 01:52:05.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:05.501028] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe383e0 was disconnected and freed. reset controller. 
00:33:53.720 [2024-07-12 01:52:05.501037] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:53.720 [2024-07-12 01:52:05.501045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.720 [2024-07-12 01:52:05.501089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17260 (9): Bad file descriptor 00:33:53.720 [2024-07-12 01:52:05.504646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.720 [2024-07-12 01:52:05.553244] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:53.720 [2024-07-12 01:52:09.077190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 
01:52:09.077378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.720 [2024-07-12 01:52:09.077668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.720 [2024-07-12 01:52:09.077675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.721 [2024-07-12 01:52:09.077739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43680 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.077985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.077994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.078001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.078009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.078016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.078025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:53.721 [2024-07-12 01:52:09.078032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.078041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.078048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.078056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.078063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.721 [2024-07-12 01:52:09.078072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.721 [2024-07-12 01:52:09.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078190] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.722 [2024-07-12 01:52:09.078720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.722 [2024-07-12 01:52:09.078727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 
[2024-07-12 01:52:09.078864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.078985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.078994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.079001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.079018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.079034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.723 [2024-07-12 01:52:09.079050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.723 [2024-07-12 01:52:09.079305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.723 [2024-07-12 01:52:09.079331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.723 [2024-07-12 01:52:09.079338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44400 len:8 PRP1 0x0 PRP2 0x0 00:33:53.723 [2024-07-12 01:52:09.079345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079379] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3a590 was disconnected and freed. reset controller. 
00:33:53.723 [2024-07-12 01:52:09.079389] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:53.723 [2024-07-12 01:52:09.079408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.723 [2024-07-12 01:52:09.079416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.723 [2024-07-12 01:52:09.079433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.723 [2024-07-12 01:52:09.079448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.723 [2024-07-12 01:52:09.079467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.723 [2024-07-12 01:52:09.079474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.723 [2024-07-12 01:52:09.083078] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.723 [2024-07-12 01:52:09.083103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17260 (9): Bad file descriptor 00:33:53.724 [2024-07-12 01:52:09.209629] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
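The block above is one complete failover cycle as bdev_nvme logs it: queued I/O is aborted with ABORTED - SQ DELETION status, the qpair (0xe3a590) is disconnected and freed, bdev_nvme_failover_trid switches the path from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue is drained, the controller briefly enters the failed state, and the reset against the new path ends with "Resetting controller successful". A minimal sketch for pulling that history out of a captured copy of this output; the helper and the log file name are assumptions, not part of failover.sh:

  # Hypothetical post-processing helper (not part of failover.sh): summarize the
  # failover history from a saved copy of this output. The file name is assumed.
  LOG=bdevperf_output.txt
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG"    # path switch history
  grep -c 'Resetting controller successful' "$LOG"             # completed resets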
00:33:53.724 [2024-07-12 01:52:13.425706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.724 [2024-07-12 01:52:13.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.425990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.425997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.724 [2024-07-12 01:52:13.426127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82056 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.724 [2024-07-12 01:52:13.426435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.724 [2024-07-12 01:52:13.426442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:53.725 [2024-07-12 01:52:13.426570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.426905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.426921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.426939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.426955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.426971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.426987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.426997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.427004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.725 [2024-07-12 01:52:13.427020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.725 [2024-07-12 01:52:13.427113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.725 [2024-07-12 01:52:13.427120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 
[2024-07-12 01:52:13.427390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.726 [2024-07-12 01:52:13.427538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.726 [2024-07-12 01:52:13.427700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.726 [2024-07-12 01:52:13.427709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.727 [2024-07-12 01:52:13.427795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.727 [2024-07-12 01:52:13.427822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.727 [2024-07-12 01:52:13.427829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:33:53.727 [2024-07-12 01:52:13.427836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427874] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3a570 was disconnected and freed. reset controller. 
00:33:53.727 [2024-07-12 01:52:13.427883] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:53.727 [2024-07-12 01:52:13.427902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.727 [2024-07-12 01:52:13.427910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.727 [2024-07-12 01:52:13.427925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.727 [2024-07-12 01:52:13.427940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.727 [2024-07-12 01:52:13.427955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.727 [2024-07-12 01:52:13.427962] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.727 [2024-07-12 01:52:13.431562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.727 [2024-07-12 01:52:13.431588] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17260 (9): Bad file descriptor 00:33:53.727 [2024-07-12 01:52:13.509190] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
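This second cycle (10.0.0.2:4422 back to 10.0.0.2:4420) brings the number of "Resetting controller successful" messages for the run to the three that failover.sh@65-67 checks for immediately below: the script counts the matches and fails the test if the count is not exactly 3. A condensed sketch of that assertion, with the captured-output file name as an assumed placeholder:

  # Sketch of the pass/fail gate that follows the 15-second run; the log file
  # name is an assumption, the commands mirror the failover.sh@65-67 trace below.
  count=$(grep -c 'Resetting controller successful' bdevperf_output.txt)
  if (( count != 3 )); then
      echo "expected 3 successful failover resets, saw $count" >&2
      exit 1
  fi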
00:33:53.727 00:33:53.727 Latency(us) 00:33:53.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.727 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:53.727 Verification LBA range: start 0x0 length 0x4000 00:33:53.727 NVMe0n1 : 15.01 11354.40 44.35 592.39 0.00 10686.77 535.89 28398.93 00:33:53.727 =================================================================================================================== 00:33:53.727 Total : 11354.40 44.35 592.39 0.00 10686.77 535.89 28398.93 00:33:53.727 Received shutdown signal, test time was about 15.000000 seconds 00:33:53.727 00:33:53.727 Latency(us) 00:33:53.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.727 =================================================================================================================== 00:33:53.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4192222 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4192222 /var/tmp/bdevperf.sock 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4192222 ']' 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:53.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
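The trace above also restarts bdevperf for the second phase: the -z flag keeps it idle until an RPC client configures it, -r points it at the /var/tmp/bdevperf.sock UNIX socket, and -q 128 -o 4096 -w verify -t 1 describe the short verification run that is kicked off later via perform_tests. A rough sketch of that launch-and-wait step; the polling loop is a simplified stand-in for the waitforlisten helper, and the checkout path is taken from the trace:

  # Start bdevperf idle (-z) behind a UNIX-domain RPC socket, then wait for the
  # socket to answer. The polling loop is a simplified stand-in for waitforlisten.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout location, as in the trace
  "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  until "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done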
00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:53.727 01:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:53.727 [2024-07-12 01:52:20.024823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:53.727 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:53.987 [2024-07-12 01:52:20.185216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:53.987 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.247 NVMe0n1 00:33:54.247 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.507 00:33:54.507 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.768 00:33:54.768 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:54.768 01:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:54.768 01:52:21 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:55.029 01:52:21 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:58.325 01:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:58.325 01:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:58.325 01:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4193219 00:33:58.325 01:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:58.325 01:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4193219 00:33:59.263 0 00:33:59.263 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:59.263 [2024-07-12 01:52:19.712169] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
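At this point the multipath topology for the second test is in place: the target is told to listen on 10.0.0.2 ports 4421 and 4422 in addition to 4420, the NVMe0 controller in bdevperf is attached through all three portals, and the active 4420 path is then detached so the next I/O forces a failover. A compact sketch of that sequence over the two RPC endpoints; the $rpc and $nqn shorthands are assumptions, while the individual calls match the ones traced above:

  # Assumed shorthand for the SPDK RPC client shown in the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: expose two extra portals for the same subsystem.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # Initiator side (bdevperf): attach NVMe0 through all three portals.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Drop the active 4420 path so the next I/O has to fail over to 4421.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3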
00:33:59.263 [2024-07-12 01:52:19.712226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192222 ] 00:33:59.263 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.263 [2024-07-12 01:52:19.778195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.263 [2024-07-12 01:52:19.807248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.263 [2024-07-12 01:52:21.209352] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:59.263 [2024-07-12 01:52:21.209398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.263 [2024-07-12 01:52:21.209410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.263 [2024-07-12 01:52:21.209419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.263 [2024-07-12 01:52:21.209427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.263 [2024-07-12 01:52:21.209434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.263 [2024-07-12 01:52:21.209441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.263 [2024-07-12 01:52:21.209449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.263 [2024-07-12 01:52:21.209456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.263 [2024-07-12 01:52:21.209463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.263 [2024-07-12 01:52:21.209489] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.263 [2024-07-12 01:52:21.209504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba6260 (9): Bad file descriptor 00:33:59.263 [2024-07-12 01:52:21.230673] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:59.263 Running I/O for 1 seconds... 
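The pass criterion visible earlier in the trace is simply how many "Resetting controller successful" notices end up in try.txt: one per forced path switch, three in the earlier run. A sketch of that check, assuming the bdevperf output was captured to a file named $LOGFILE (the real test keeps it under test/nvmf/host/try.txt):

  LOGFILE=/path/to/try.txt   # hypothetical path to the captured bdevperf output
  expected=3                 # one successful reset per detached path

  count=$(grep -c 'Resetting controller successful' "$LOGFILE")
  if (( count != expected )); then
      echo "failover count mismatch: got $count, want $expected" >&2
      exit 1
  fi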
00:33:59.263
00:33:59.263 Latency(us)
00:33:59.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:59.263 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:59.263 Verification LBA range: start 0x0 length 0x4000
00:33:59.263 NVMe0n1 : 1.01 11219.26 43.83 0.00 0.00 11351.25 2252.80 11741.87
00:33:59.263 ===================================================================================================================
00:33:59.263 Total : 11219.26 43.83 0.00 0.00 11351.25 2252.80 11741.87
00:33:59.263 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:59.263 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:59.523 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:59.523 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:59.523 01:52:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:59.782 01:52:26 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:00.041 01:52:26 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4192222
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4192222 ']'
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4192222
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4192222
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4192222'
killing process with pid 4192222
01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4192222
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4192222
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:34:03.334 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:34:03.594
01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:03.594 rmmod nvme_tcp 00:34:03.594 rmmod nvme_fabrics 00:34:03.594 rmmod nvme_keyring 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4188819 ']' 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4188819 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4188819 ']' 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4188819 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4188819 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4188819' 00:34:03.594 killing process with pid 4188819 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4188819 00:34:03.594 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4188819 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.854 01:52:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.766 01:52:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:05.766 00:34:05.766 real 0m38.766s 00:34:05.766 user 1m55.266s 00:34:05.766 sys 0m8.810s 00:34:05.766 01:52:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:05.766 01:52:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
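The teardown in the trace follows the generic nvmftestfini path: unload the host-side NVMe/TCP modules, kill the target process, drop the target network namespace and flush the initiator address. A rough equivalent, with pid and interface names copied from this particular run and therefore only illustrative (the namespace delete is an assumption about what _remove_spdk_ns does):

  NVMF_PID=4188819        # target pid from this run; real scripts track it in a variable
  INITIATOR_IF=cvl_0_1
  TARGET_NS=cvl_0_0_ns_spdk

  modprobe -v -r nvme-tcp || true            # also drops nvme_fabrics once unused
  kill "$NVMF_PID"
  wait "$NVMF_PID" 2>/dev/null               # only succeeds if the target was a child of this shell

  ip netns delete "$TARGET_NS" 2>/dev/null   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush "$INITIATOR_IF"           # remove the 10.0.0.1/24 test address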
00:34:05.766 ************************************ 00:34:05.766 END TEST nvmf_failover 00:34:05.766 ************************************ 00:34:05.766 01:52:32 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:05.766 01:52:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:05.766 01:52:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:05.766 01:52:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:05.766 ************************************ 00:34:05.766 START TEST nvmf_host_discovery 00:34:05.766 ************************************ 00:34:05.766 01:52:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:06.027 * Looking for test storage... 00:34:06.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.027 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.028 01:52:32 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:34:06.028 01:52:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:14.168 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:14.168 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.168 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:14.169 Found net devices under 0000:31:00.0: cvl_0_0 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:14.169 Found net devices under 0000:31:00.1: cvl_0_1 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:14.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:34:14.169 00:34:14.169 --- 10.0.0.2 ping statistics --- 00:34:14.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.169 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:34:14.169 00:34:14.169 --- 10.0.0.1 ping statistics --- 00:34:14.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.169 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=5740 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 5740 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 
-- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 5740 ']' 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:14.169 01:52:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.169 [2024-07-12 01:52:40.508666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:14.169 [2024-07-12 01:52:40.508716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.473 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.473 [2024-07-12 01:52:40.600303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.473 [2024-07-12 01:52:40.635073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.473 [2024-07-12 01:52:40.635118] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.473 [2024-07-12 01:52:40.635126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.473 [2024-07-12 01:52:40.635132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.473 [2024-07-12 01:52:40.635138] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
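As the trace shows, the host-discovery run drives a real pair of E810 ports: one NIC is moved into a private network namespace to act as the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then started inside that namespace. Condensed from the logged commands, with interface names specific to this machine and $SPDK_DIR as shorthand for the checkout path:

  TGT_IF=cvl_0_0           # machine-specific port that becomes the target side
  INI_IF=cvl_0_1           # stays in the root namespace as the initiator port
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic back in on the initiator side and sanity-check the path.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Launch the target inside the namespace so its listeners bind to 10.0.0.2.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &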
00:34:14.473 [2024-07-12 01:52:40.635168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 [2024-07-12 01:52:41.333593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 [2024-07-12 01:52:41.345847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 null0 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 null1 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=5862 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 
-- # waitforlisten 5862 /tmp/host.sock 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 5862 ']' 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:15.117 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:15.117 01:52:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.117 [2024-07-12 01:52:41.440761] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:15.117 [2024-07-12 01:52:41.440822] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5862 ] 00:34:15.384 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.384 [2024-07-12 01:52:41.511590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.384 [2024-07-12 01:52:41.550957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:15.955 01:52:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:15.955 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:16.217 01:52:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.217 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.479 [2024-07-12 01:52:42.576929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:16.479 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:34:16.480 01:52:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:34:17.051 [2024-07-12 01:52:43.232298] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:17.051 [2024-07-12 01:52:43.232318] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:17.051 [2024-07-12 01:52:43.232336] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:17.051 [2024-07-12 01:52:43.320606] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:17.312 [2024-07-12 01:52:43.423049] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:34:17.312 [2024-07-12 01:52:43.423068] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:17.574 01:52:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:17.574 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.835 01:52:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.835 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.097 [2024-07-12 01:52:44.349590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:18.097 [2024-07-12 01:52:44.350787] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:18.097 [2024-07-12 01:52:44.350814] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:18.097 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.359 [2024-07-12 01:52:44.479596] 
bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:18.359 01:52:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:34:18.359 [2024-07-12 01:52:44.578332] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:18.359 [2024-07-12 01:52:44.578348] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:18.359 [2024-07-12 01:52:44.578354] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.301 [2024-07-12 01:52:45.629399] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:19.301 [2024-07-12 01:52:45.629419] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:19.301 [2024-07-12 01:52:45.635796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.301 [2024-07-12 01:52:45.635815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.301 [2024-07-12 01:52:45.635825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.301 [2024-07-12 01:52:45.635832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.301 [2024-07-12 01:52:45.635840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.301 [2024-07-12 01:52:45.635847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.301 [2024-07-12 01:52:45.635855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.301 [2024-07-12 01:52:45.635862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.301 [2024-07-12 01:52:45.635869] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.301 [2024-07-12 01:52:45.645810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.301 [2024-07-12 01:52:45.655850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.301 [2024-07-12 01:52:45.656187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.301 [2024-07-12 01:52:45.656201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.301 [2024-07-12 01:52:45.656209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.301 [2024-07-12 01:52:45.656220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.301 [2024-07-12 01:52:45.656235] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.301 [2024-07-12 01:52:45.656242] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.301 [2024-07-12 01:52:45.656250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.301 [2024-07-12 01:52:45.656261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.301 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.562 [2024-07-12 01:52:45.665904] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.562 [2024-07-12 01:52:45.666200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.562 [2024-07-12 01:52:45.666211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.562 [2024-07-12 01:52:45.666218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.562 [2024-07-12 01:52:45.666234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.562 [2024-07-12 01:52:45.666244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.562 [2024-07-12 01:52:45.666251] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.562 [2024-07-12 01:52:45.666257] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.666268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.563 [2024-07-12 01:52:45.675954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.563 [2024-07-12 01:52:45.676366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-12 01:52:45.676379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.563 [2024-07-12 01:52:45.676386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.563 [2024-07-12 01:52:45.676397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.563 [2024-07-12 01:52:45.676407] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.563 [2024-07-12 01:52:45.676413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.563 [2024-07-12 01:52:45.676425] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.676436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.563 [2024-07-12 01:52:45.686009] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.563 [2024-07-12 01:52:45.686467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-12 01:52:45.686504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.563 [2024-07-12 01:52:45.686516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.563 [2024-07-12 01:52:45.686535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.563 [2024-07-12 01:52:45.686547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.563 [2024-07-12 01:52:45.686554] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.563 [2024-07-12 01:52:45.686561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.686576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:19.563 [2024-07-12 01:52:45.696060] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.563 [2024-07-12 01:52:45.696471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-12 01:52:45.696508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.563 [2024-07-12 01:52:45.696520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.563 [2024-07-12 01:52:45.696538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.563 [2024-07-12 01:52:45.696550] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.563 [2024-07-12 01:52:45.696565] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.563 [2024-07-12 01:52:45.696573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.696588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 [2024-07-12 01:52:45.706116] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.563 [2024-07-12 01:52:45.706624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-12 01:52:45.706661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.563 [2024-07-12 01:52:45.706671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.563 [2024-07-12 01:52:45.706690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.563 [2024-07-12 01:52:45.706741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.563 [2024-07-12 01:52:45.706750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.563 [2024-07-12 01:52:45.706759] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.706773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.563 [2024-07-12 01:52:45.716173] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.563 [2024-07-12 01:52:45.716636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-12 01:52:45.716674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e18870 with addr=10.0.0.2, port=4420 00:34:19.563 [2024-07-12 01:52:45.716686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18870 is same with the state(5) to be set 00:34:19.563 [2024-07-12 01:52:45.716706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e18870 (9): Bad file descriptor 00:34:19.563 [2024-07-12 01:52:45.716719] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:19.563 [2024-07-12 01:52:45.716727] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:19.563 [2024-07-12 01:52:45.716736] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:19.563 [2024-07-12 01:52:45.716752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.563 [2024-07-12 01:52:45.716790] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:19.563 [2024-07-12 01:52:45.716807] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:19.563 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.564 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:19.824 
01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:19.824 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.825 01:52:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.825 01:52:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.769 [2024-07-12 01:52:47.040380] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:20.769 [2024-07-12 01:52:47.040397] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:20.769 [2024-07-12 01:52:47.040409] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:21.029 [2024-07-12 01:52:47.128671] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:21.291 [2024-07-12 01:52:47.401263] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:21.291 [2024-07-12 01:52:47.401294] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:21.291 request: 00:34:21.291 { 00:34:21.291 "name": "nvme", 00:34:21.291 "trtype": "tcp", 00:34:21.291 "traddr": "10.0.0.2", 00:34:21.291 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:21.291 "adrfam": "ipv4", 00:34:21.291 "trsvcid": "8009", 00:34:21.291 "wait_for_attach": true, 00:34:21.291 "method": "bdev_nvme_start_discovery", 00:34:21.291 "req_id": 1 00:34:21.291 } 00:34:21.291 Got JSON-RPC error response 00:34:21.291 response: 00:34:21.291 { 00:34:21.291 "code": -17, 00:34:21.291 "message": "File exists" 00:34:21.291 } 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.291 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.292 request: 00:34:21.292 { 00:34:21.292 "name": "nvme_second", 00:34:21.292 "trtype": "tcp", 00:34:21.292 "traddr": "10.0.0.2", 00:34:21.292 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:21.292 "adrfam": "ipv4", 00:34:21.292 "trsvcid": "8009", 00:34:21.292 "wait_for_attach": true, 00:34:21.292 "method": "bdev_nvme_start_discovery", 00:34:21.292 "req_id": 1 00:34:21.292 } 00:34:21.292 Got JSON-RPC error response 00:34:21.292 response: 00:34:21.292 { 00:34:21.292 "code": -17, 00:34:21.292 "message": "File exists" 00:34:21.292 } 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.292 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.554 01:52:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.554 01:52:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.499 [2024-07-12 01:52:48.664780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.499 [2024-07-12 01:52:48.664808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4ac60 with addr=10.0.0.2, port=8010 00:34:22.499 [2024-07-12 01:52:48.664821] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:22.499 [2024-07-12 01:52:48.664828] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:22.499 [2024-07-12 01:52:48.664835] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:23.442 [2024-07-12 01:52:49.667086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.442 [2024-07-12 01:52:49.667108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4ac60 with addr=10.0.0.2, port=8010 00:34:23.442 [2024-07-12 01:52:49.667118] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:23.442 [2024-07-12 01:52:49.667125] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:23.442 [2024-07-12 01:52:49.667131] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:24.384 [2024-07-12 01:52:50.669122] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:24.384 request: 00:34:24.384 { 00:34:24.384 "name": "nvme_second", 00:34:24.384 "trtype": "tcp", 00:34:24.384 "traddr": "10.0.0.2", 00:34:24.384 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:24.384 "adrfam": "ipv4", 00:34:24.384 "trsvcid": "8010", 00:34:24.384 "attach_timeout_ms": 3000, 00:34:24.384 "method": "bdev_nvme_start_discovery", 00:34:24.384 "req_id": 1 00:34:24.384 } 00:34:24.384 Got JSON-RPC error response 00:34:24.384 response: 00:34:24.384 { 00:34:24.384 "code": -110, 00:34:24.384 "message": "Connection timed out" 
00:34:24.384 } 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 5862 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:24.384 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:24.645 rmmod nvme_tcp 00:34:24.645 rmmod nvme_fabrics 00:34:24.645 rmmod nvme_keyring 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 5740 ']' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 5740 ']' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 5740' 00:34:24.645 killing process with pid 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 5740 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.645 01:52:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.194 01:52:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:27.194 00:34:27.194 real 0m20.930s 00:34:27.194 user 0m23.864s 00:34:27.194 sys 0m7.484s 00:34:27.194 01:52:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:27.194 01:52:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.195 ************************************ 00:34:27.195 END TEST nvmf_host_discovery 00:34:27.195 ************************************ 00:34:27.195 01:52:53 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:27.195 01:52:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:27.195 01:52:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:27.195 01:52:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.195 ************************************ 00:34:27.195 START TEST nvmf_host_multipath_status 00:34:27.195 ************************************ 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:27.195 * Looking for test storage... 
00:34:27.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:27.195 01:52:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:27.195 01:52:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:35.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:35.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
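The device scan above keys off PCI vendor:device IDs (0x8086:0x159b is the E810/ice part found on both ports here) and then resolves each PCI function to its netdev through sysfs, which is where the cvl_0_0 and cvl_0_1 names on the following lines come from. A rough equivalent of that lookup, assuming the two addresses reported in this run:

  # map each matched PCI function to the network interface the kernel created for it
  for pci in 0000:31:00.0 0000:31:00.1; do
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done
  done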
00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.343 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:35.344 Found net devices under 0000:31:00.0: cvl_0_0 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:35.344 Found net devices under 0000:31:00.1: cvl_0_1 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:35.344 01:53:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:35.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:34:35.344 00:34:35.344 --- 10.0.0.2 ping statistics --- 00:34:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.344 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:34:35.344 00:34:35.344 --- 10.0.0.1 ping statistics --- 00:34:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.344 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=12516 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 12516 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 12516 ']' 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.344 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:35.345 01:53:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.345 [2024-07-12 01:53:01.437935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
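Because both E810 ports live on the same host, nvmf_tcp_init isolates the target-side port in its own network namespace, and the two pings above confirm that 10.0.0.1 and 10.0.0.2 can reach each other across that boundary before the target starts. A condensed sketch of the setup, reusing the interface and namespace names from this trace:

  ip netns add cvl_0_0_ns_spdk                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
  # nvmf_tgt is then launched inside the namespace, as in the trace:
  # ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3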
00:34:35.345 [2024-07-12 01:53:01.437989] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.345 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.345 [2024-07-12 01:53:01.510438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.345 [2024-07-12 01:53:01.541670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.345 [2024-07-12 01:53:01.541707] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.345 [2024-07-12 01:53:01.541715] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.345 [2024-07-12 01:53:01.541721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.345 [2024-07-12 01:53:01.541726] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.345 [2024-07-12 01:53:01.541871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.345 [2024-07-12 01:53:01.541871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=12516 00:34:35.922 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:36.182 [2024-07-12 01:53:02.358973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.182 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:36.182 Malloc0 00:34:36.442 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:36.442 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:36.701 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.701 [2024-07-12 01:53:02.976938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.701 01:53:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:36.961 [2024-07-12 01:53:03.129341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=12870 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 12870 /var/tmp/bdevperf.sock 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 12870 ']' 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:36.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:36.961 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:37.220 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:37.220 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:34:37.220 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:37.220 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:37.791 Nvme0n1 00:34:37.791 01:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:38.052 Nvme0n1 00:34:38.052 01:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:38.052 01:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:39.965 01:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:39.965 01:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:40.226 01:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:40.487 01:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:41.427 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:41.427 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:41.427 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.427 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.688 01:53:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:41.949 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.949 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:41.949 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.949 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:42.209 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.470 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.470 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:42.470 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:42.470 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:42.731 01:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:43.672 01:53:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:43.672 01:53:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:43.672 01:53:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.672 01:53:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.933 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:43.933 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:43.933 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.933 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.194 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.454 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.454 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:44.454 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.454 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.714 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.714 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.714 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.714 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:44.714 01:53:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.714 01:53:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:44.714 01:53:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:44.975 01:53:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:45.236 01:53:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.178 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.440 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:46.440 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.440 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.440 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.700 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.701 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.701 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.701 01:53:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:46.701 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.701 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:46.701 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.701 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:46.960 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.960 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:46.960 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.960 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:47.220 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.220 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:47.220 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.220 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:47.480 01:53:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:48.423 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:48.423 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:48.423 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.423 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:48.684 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.684 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:48.684 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:48.684 01:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.945 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
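Each check in this loop is preceded by set_ANA_state, which re-advertises the ANA group state of the two listeners through the target's RPC interface and then gives the initiator a second to notice. A minimal sketch of the flip just performed (non_optimized on 4420, inaccessible on 4421), with rpc.py standing in for the full scripts/rpc.py path used in the trace:

  # advertise new ANA states for the two listeners of cnode1
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1   # let the host-side driver observe the change before checking paths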
00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:49.206 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.467 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.467 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:49.467 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:49.728 01:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:49.728 01:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:51.112 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:51.112 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.113 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
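The port_status checks themselves query the bdevperf process (the NVMe host in this test) over its own RPC socket and pull one boolean per call out of bdev_nvme_get_io_paths with jq, keyed by the listener port. A short sketch of that probe, again with rpc.py abbreviating the full script path from the trace:

  # read path flags as the initiator sees them, one listener port at a time
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").accessible'
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").current'
  # check_status compares the printed true/false values against the expected pattern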
00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.389 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.716 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.716 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:51.716 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.716 01:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.716 01:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.716 01:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:51.716 01:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:51.993 01:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:52.255 01:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.198 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.458 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.458 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.458 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.458 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.719 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.719 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.719 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.719 01:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.719 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.719 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:53.719 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.719 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.979 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.979 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.979 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.980 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:54.239 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.239 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:54.239 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:54.239 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:54.499 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:54.759 01:53:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:55.699 01:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:55.699 01:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.699 01:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.699 01:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.959 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.218 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:56.478 01:53:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.478 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:56.478 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.478 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.738 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.738 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:56.738 01:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:56.738 01:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:56.997 01:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:57.934 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:57.934 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:57.934 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.934 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.194 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:58.455 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.455 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.455 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.455 01:53:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.455 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.455 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.715 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.715 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.715 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.715 01:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.715 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.715 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:58.715 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.715 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.974 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.974 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:58.974 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:59.233 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:59.233 01:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:00.613 01:53:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.613 01:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.873 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.873 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.873 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.873 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.162 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.422 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.422 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:01.422 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:01.422 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:01.689 01:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:02.627 01:53:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:02.627 01:53:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:02.627 01:53:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.627 01:53:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.888 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.888 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:02.888 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.888 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.146 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.405 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.405 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:03.405 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.405 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 12870 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 12870 ']' 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 12870 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 12870 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 12870' 00:35:03.664 killing process with pid 12870 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 12870 00:35:03.664 01:53:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 12870 00:35:03.927 Connection closed with partial response: 00:35:03.927 00:35:03.927 00:35:03.927 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 12870 00:35:03.927 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:03.927 [2024-07-12 01:53:03.189160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:03.927 [2024-07-12 01:53:03.189218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12870 ] 00:35:03.927 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.927 [2024-07-12 01:53:03.245639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.927 [2024-07-12 01:53:03.273752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.927 Running I/O for 90 seconds... 
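For context, the xtrace above (multipath_status.sh lines 59-137) repeats a single pattern: set the ANA state of each NVMe-oF listener with nvmf_subsystem_listener_set_ana_state, sleep one second, then ask the bdevperf process over its RPC socket for its view of the I/O paths and compare the current/connected/accessible flags against the expected values. The sketch below is an illustrative reconstruction of that probe from the rpc.py and jq commands visible in the trace; the helper name, argument order, and comments are inferred and are not taken from the real host/multipath_status.sh.

# illustrative reconstruction -- not the actual SPDK test code
port_status() {
  local port=$1 field=$2 expected=$3
  local actual
  # bdevperf was started with -r /var/tmp/bdevperf.sock, so query it directly
  actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
  [[ "$actual" == "$expected" ]]
}
# e.g. after "set_ANA_state inaccessible optimized" the trace expects:
#   port_status 4420 current false   ; port_status 4421 current true
#   port_status 4420 accessible false; port_status 4421 accessible true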
00:35:03.927 [2024-07-12 01:53:15.872938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.872971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.873095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.873100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:03.927 [2024-07-12 01:53:15.874432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.927 [2024-07-12 01:53:15.874437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:03.928 [2024-07-12 01:53:15.874668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.874985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.874990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
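For readers skimming the catted try.txt above: each pair of NOTICE lines is the NVMe driver printing a failed command and its completion, and "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the path-related status (status code type 0x3, status code 0x02) returned while the ANA state of the path being used was inaccessible, which is exactly what the test provokes. Rather than reading the dump line by line, a one-liner like the following (illustrative only, run against the try.txt file shown being catted above) tallies those completions per queue:

grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' try.txt | sort | uniq -c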
00:35:03.928 [2024-07-12 01:53:15.875170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.928 [2024-07-12 01:53:15.875382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.928 [2024-07-12 01:53:15.875765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:03.928 [2024-07-12 01:53:15.875780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 
[2024-07-12 01:53:15.875784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:15.875964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:15.875978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50640 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
[repeated nvme_qpair.c NOTICE output, 2024-07-12 01:53:15 through 01:53:27: paired nvme_io_qpair_print_command / spdk_nvme_print_completion entries for qid:1 WRITE commands (lba 50648-50736, 94472-94584) and READ commands (lba 49784-49904, 93800-94456), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:35:03.929 [2024-07-12 01:53:27.899200]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:27.899210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:27.899215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:03.929 [2024-07-12 01:53:27.899226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.929 [2024-07-12 01:53:27.899236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:03.929 Received shutdown signal, test time was about 25.584790 seconds 00:35:03.929 00:35:03.929 Latency(us) 00:35:03.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.929 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:03.929 Verification LBA range: start 0x0 length 0x4000 00:35:03.929 Nvme0n1 : 25.58 10874.34 42.48 0.00 0.00 11753.07 249.17 3019898.88 00:35:03.929 =================================================================================================================== 00:35:03.929 Total : 10874.34 42.48 0.00 0.00 11753.07 249.17 3019898.88 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:03.929 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:04.190 rmmod nvme_tcp 00:35:04.190 rmmod nvme_fabrics 00:35:04.190 rmmod nvme_keyring 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 12516 ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 12516 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 12516 ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 12516 00:35:04.190 01:53:30 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 12516 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 12516' 00:35:04.190 killing process with pid 12516 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 12516 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 12516 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.190 01:53:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.734 01:53:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:06.734 00:35:06.734 real 0m39.485s 00:35:06.734 user 1m38.820s 00:35:06.734 sys 0m11.519s 00:35:06.734 01:53:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:06.734 01:53:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:06.734 ************************************ 00:35:06.734 END TEST nvmf_host_multipath_status 00:35:06.734 ************************************ 00:35:06.734 01:53:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:06.734 01:53:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:06.734 01:53:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.734 01:53:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.734 ************************************ 00:35:06.734 START TEST nvmf_discovery_remove_ifc 00:35:06.734 ************************************ 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:06.734 * Looking for test storage... 
00:35:06.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.734 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.735 01:53:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.861 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:14.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:14.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.862 01:53:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:14.862 Found net devices under 0000:31:00.0: cvl_0_0 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:14.862 Found net devices under 0000:31:00.1: cvl_0_1 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:14.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:14.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:35:14.862 00:35:14.862 --- 10.0.0.2 ping statistics --- 00:35:14.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.862 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:14.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:14.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:35:14.862 00:35:14.862 --- 10.0.0.1 ping statistics --- 00:35:14.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.862 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=22838 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 22838 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 22838 ']' 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:14.862 01:53:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.862 [2024-07-12 01:53:40.814666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:35:14.862 [2024-07-12 01:53:40.814720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.862 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.862 [2024-07-12 01:53:40.909993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.862 [2024-07-12 01:53:40.955759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.862 [2024-07-12 01:53:40.955821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.862 [2024-07-12 01:53:40.955829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.862 [2024-07-12 01:53:40.955836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.862 [2024-07-12 01:53:40.955842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.862 [2024-07-12 01:53:40.955869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.431 [2024-07-12 01:53:41.633138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.431 [2024-07-12 01:53:41.641305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:15.431 null0 00:35:15.431 [2024-07-12 01:53:41.673297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=22951 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 22951 /tmp/host.sock 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 22951 ']' 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:15.431 
01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:15.431 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:15.431 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.431 [2024-07-12 01:53:41.726114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:15.431 [2024-07-12 01:53:41.726160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid22951 ] 00:35:15.431 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.690 [2024-07-12 01:53:41.789417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.690 [2024-07-12 01:53:41.820271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.690 01:53:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.071 [2024-07-12 01:53:42.992406] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:17.071 [2024-07-12 01:53:42.992426] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:17.071 [2024-07-12 01:53:42.992439] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:17.071 [2024-07-12 01:53:43.081731] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:17.071 [2024-07-12 01:53:43.263504] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:17.071 [2024-07-12 01:53:43.263555] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:17.071 [2024-07-12 01:53:43.263577] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:17.071 [2024-07-12 01:53:43.263590] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:17.071 [2024-07-12 01:53:43.263609] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.071 [2024-07-12 01:53:43.269499] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cd91f0 was disconnected and freed. delete nvme_qpair. 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:17.071 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:35:17.331 01:53:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:18.269 01:53:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:19.651 01:53:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:20.589 01:53:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:21.528 01:53:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:22.473 [2024-07-12 01:53:48.703999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:22.473 [2024-07-12 01:53:48.704036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.473 [2024-07-12 01:53:48.704047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.473 [2024-07-12 01:53:48.704056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.473 [2024-07-12 01:53:48.704064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.473 [2024-07-12 01:53:48.704072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.473 [2024-07-12 01:53:48.704079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.473 [2024-07-12 01:53:48.704086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.473 [2024-07-12 01:53:48.704093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.473 [2024-07-12 01:53:48.704102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.473 [2024-07-12 01:53:48.704109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.473 [2024-07-12 01:53:48.704116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca0360 is same with the state(5) to be set 00:35:22.473 [2024-07-12 01:53:48.714021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca0360 (9): Bad file descriptor 00:35:22.473 [2024-07-12 01:53:48.724060] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:22.473 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.474 
01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.474 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.474 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.474 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.474 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.474 01:53:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.856 [2024-07-12 01:53:49.777268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:23.856 [2024-07-12 01:53:49.777312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca0360 with addr=10.0.0.2, port=4420 00:35:23.856 [2024-07-12 01:53:49.777324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca0360 is same with the state(5) to be set 00:35:23.856 [2024-07-12 01:53:49.777349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca0360 (9): Bad file descriptor 00:35:23.856 [2024-07-12 01:53:49.777689] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:23.856 [2024-07-12 01:53:49.777707] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.856 [2024-07-12 01:53:49.777714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.856 [2024-07-12 01:53:49.777722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.856 [2024-07-12 01:53:49.777738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.856 [2024-07-12 01:53:49.777746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.856 01:53:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.856 01:53:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:23.856 01:53:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:24.427 [2024-07-12 01:53:50.780120] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:24.427 [2024-07-12 01:53:50.780144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:24.427 [2024-07-12 01:53:50.780152] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:24.427 [2024-07-12 01:53:50.780159] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:24.427 [2024-07-12 01:53:50.780173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.427 [2024-07-12 01:53:50.780192] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:24.427 [2024-07-12 01:53:50.780215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.427 [2024-07-12 01:53:50.780225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.427 [2024-07-12 01:53:50.780239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.427 [2024-07-12 01:53:50.780247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.427 [2024-07-12 01:53:50.780256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.427 [2024-07-12 01:53:50.780268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.427 [2024-07-12 01:53:50.780276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.427 [2024-07-12 01:53:50.780283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.427 [2024-07-12 01:53:50.780291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.427 [2024-07-12 01:53:50.780299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.427 [2024-07-12 01:53:50.780306] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:35:24.427 [2024-07-12 01:53:50.780929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f7f0 (9): Bad file descriptor 00:35:24.427 [2024-07-12 01:53:50.781942] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:24.427 [2024-07-12 01:53:50.781952] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:24.688 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.688 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.688 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.688 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.689 01:53:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.689 01:53:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:24.689 01:53:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:26.074 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:26.074 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.074 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.074 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:26.075 01:53:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:26.645 [2024-07-12 01:53:52.836267] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:26.645 [2024-07-12 01:53:52.836284] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:26.645 [2024-07-12 01:53:52.836297] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:26.645 [2024-07-12 01:53:52.966727] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:26.906 01:53:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:26.906 [2024-07-12 01:53:53.147985] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:26.906 [2024-07-12 01:53:53.148025] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:26.906 [2024-07-12 01:53:53.148046] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:26.906 [2024-07-12 01:53:53.148059] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:26.906 [2024-07-12 01:53:53.148067] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:26.906 [2024-07-12 01:53:53.153641] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cae120 was disconnected and freed. delete nvme_qpair. 
00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 22951 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 22951 ']' 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 22951 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:27.848 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 22951 00:35:28.108 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:28.108 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 22951' 00:35:28.109 killing process with pid 22951 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 22951 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 22951 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:28.109 rmmod nvme_tcp 00:35:28.109 rmmod nvme_fabrics 00:35:28.109 rmmod nvme_keyring 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:28.109 01:53:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 22838 ']' 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 22838 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 22838 ']' 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 22838 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:28.109 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 22838 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 22838' 00:35:28.370 killing process with pid 22838 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 22838 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 22838 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:28.370 01:53:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.379 01:53:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:30.379 00:35:30.379 real 0m23.986s 00:35:30.379 user 0m28.173s 00:35:30.379 sys 0m6.998s 00:35:30.379 01:53:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:30.379 01:53:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.379 ************************************ 00:35:30.379 END TEST nvmf_discovery_remove_ifc 00:35:30.379 ************************************ 00:35:30.379 01:53:56 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.379 01:53:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:30.379 01:53:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:30.379 01:53:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.640 ************************************ 00:35:30.640 START TEST nvmf_identify_kernel_target 00:35:30.640 ************************************ 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.640 * Looking for test storage... 00:35:30.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.640 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.641 01:53:56 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:30.641 01:53:56 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:30.641 01:53:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:38.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:38.787 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.787 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.788 01:54:04 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:38.788 Found net devices under 0000:31:00.0: cvl_0_0 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:38.788 Found net devices under 0000:31:00.1: cvl_0_1 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:38.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:35:38.788 00:35:38.788 --- 10.0.0.2 ping statistics --- 00:35:38.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.788 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:35:38.788 01:54:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:38.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:35:38.788 00:35:38.788 --- 10.0.0.1 ping statistics --- 00:35:38.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.788 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.788 01:54:05 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:38.788 01:54:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:42.089 Waiting for block devices as requested 00:35:42.089 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:42.350 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:42.350 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:42.350 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:42.609 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:42.609 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:42.610 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:42.610 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:42.872 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:42.872 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:42.872 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:43.132 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:43.132 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:43.132 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:43.132 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:43.393 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:43.393 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:43.393 01:54:09 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:43.393 No valid GPT data, bailing 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:43.393 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:43.656 00:35:43.656 Discovery Log Number of Records 2, Generation counter 2 00:35:43.656 =====Discovery Log Entry 0====== 00:35:43.656 trtype: tcp 00:35:43.656 adrfam: ipv4 00:35:43.656 subtype: current discovery subsystem 00:35:43.656 treq: not specified, sq flow control disable supported 00:35:43.656 portid: 1 00:35:43.656 trsvcid: 4420 00:35:43.656 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:43.656 traddr: 10.0.0.1 00:35:43.656 eflags: none 00:35:43.656 sectype: none 00:35:43.656 =====Discovery Log Entry 1====== 
00:35:43.656 trtype: tcp 00:35:43.656 adrfam: ipv4 00:35:43.656 subtype: nvme subsystem 00:35:43.656 treq: not specified, sq flow control disable supported 00:35:43.656 portid: 1 00:35:43.656 trsvcid: 4420 00:35:43.656 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:43.656 traddr: 10.0.0.1 00:35:43.656 eflags: none 00:35:43.656 sectype: none 00:35:43.656 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:43.656 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:43.656 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.656 ===================================================== 00:35:43.656 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:43.656 ===================================================== 00:35:43.656 Controller Capabilities/Features 00:35:43.656 ================================ 00:35:43.656 Vendor ID: 0000 00:35:43.656 Subsystem Vendor ID: 0000 00:35:43.656 Serial Number: 31e625ea93de07651a17 00:35:43.656 Model Number: Linux 00:35:43.656 Firmware Version: 6.7.0-68 00:35:43.657 Recommended Arb Burst: 0 00:35:43.657 IEEE OUI Identifier: 00 00 00 00:35:43.657 Multi-path I/O 00:35:43.657 May have multiple subsystem ports: No 00:35:43.657 May have multiple controllers: No 00:35:43.657 Associated with SR-IOV VF: No 00:35:43.657 Max Data Transfer Size: Unlimited 00:35:43.657 Max Number of Namespaces: 0 00:35:43.657 Max Number of I/O Queues: 1024 00:35:43.657 NVMe Specification Version (VS): 1.3 00:35:43.657 NVMe Specification Version (Identify): 1.3 00:35:43.657 Maximum Queue Entries: 1024 00:35:43.657 Contiguous Queues Required: No 00:35:43.657 Arbitration Mechanisms Supported 00:35:43.657 Weighted Round Robin: Not Supported 00:35:43.657 Vendor Specific: Not Supported 00:35:43.657 Reset Timeout: 7500 ms 00:35:43.657 Doorbell Stride: 4 bytes 00:35:43.657 NVM Subsystem Reset: Not Supported 00:35:43.657 Command Sets Supported 00:35:43.657 NVM Command Set: Supported 00:35:43.657 Boot Partition: Not Supported 00:35:43.657 Memory Page Size Minimum: 4096 bytes 00:35:43.657 Memory Page Size Maximum: 4096 bytes 00:35:43.657 Persistent Memory Region: Not Supported 00:35:43.657 Optional Asynchronous Events Supported 00:35:43.657 Namespace Attribute Notices: Not Supported 00:35:43.657 Firmware Activation Notices: Not Supported 00:35:43.657 ANA Change Notices: Not Supported 00:35:43.657 PLE Aggregate Log Change Notices: Not Supported 00:35:43.657 LBA Status Info Alert Notices: Not Supported 00:35:43.657 EGE Aggregate Log Change Notices: Not Supported 00:35:43.657 Normal NVM Subsystem Shutdown event: Not Supported 00:35:43.657 Zone Descriptor Change Notices: Not Supported 00:35:43.657 Discovery Log Change Notices: Supported 00:35:43.657 Controller Attributes 00:35:43.657 128-bit Host Identifier: Not Supported 00:35:43.657 Non-Operational Permissive Mode: Not Supported 00:35:43.657 NVM Sets: Not Supported 00:35:43.657 Read Recovery Levels: Not Supported 00:35:43.657 Endurance Groups: Not Supported 00:35:43.657 Predictable Latency Mode: Not Supported 00:35:43.657 Traffic Based Keep ALive: Not Supported 00:35:43.657 Namespace Granularity: Not Supported 00:35:43.657 SQ Associations: Not Supported 00:35:43.657 UUID List: Not Supported 00:35:43.657 Multi-Domain Subsystem: Not Supported 00:35:43.657 Fixed Capacity Management: Not Supported 00:35:43.657 Variable Capacity Management: Not 
Supported 00:35:43.657 Delete Endurance Group: Not Supported 00:35:43.657 Delete NVM Set: Not Supported 00:35:43.657 Extended LBA Formats Supported: Not Supported 00:35:43.657 Flexible Data Placement Supported: Not Supported 00:35:43.657 00:35:43.657 Controller Memory Buffer Support 00:35:43.657 ================================ 00:35:43.657 Supported: No 00:35:43.657 00:35:43.657 Persistent Memory Region Support 00:35:43.657 ================================ 00:35:43.657 Supported: No 00:35:43.657 00:35:43.657 Admin Command Set Attributes 00:35:43.657 ============================ 00:35:43.657 Security Send/Receive: Not Supported 00:35:43.657 Format NVM: Not Supported 00:35:43.657 Firmware Activate/Download: Not Supported 00:35:43.657 Namespace Management: Not Supported 00:35:43.657 Device Self-Test: Not Supported 00:35:43.657 Directives: Not Supported 00:35:43.657 NVMe-MI: Not Supported 00:35:43.657 Virtualization Management: Not Supported 00:35:43.657 Doorbell Buffer Config: Not Supported 00:35:43.657 Get LBA Status Capability: Not Supported 00:35:43.657 Command & Feature Lockdown Capability: Not Supported 00:35:43.657 Abort Command Limit: 1 00:35:43.657 Async Event Request Limit: 1 00:35:43.657 Number of Firmware Slots: N/A 00:35:43.657 Firmware Slot 1 Read-Only: N/A 00:35:43.657 Firmware Activation Without Reset: N/A 00:35:43.657 Multiple Update Detection Support: N/A 00:35:43.657 Firmware Update Granularity: No Information Provided 00:35:43.657 Per-Namespace SMART Log: No 00:35:43.657 Asymmetric Namespace Access Log Page: Not Supported 00:35:43.657 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:43.657 Command Effects Log Page: Not Supported 00:35:43.657 Get Log Page Extended Data: Supported 00:35:43.657 Telemetry Log Pages: Not Supported 00:35:43.657 Persistent Event Log Pages: Not Supported 00:35:43.657 Supported Log Pages Log Page: May Support 00:35:43.657 Commands Supported & Effects Log Page: Not Supported 00:35:43.657 Feature Identifiers & Effects Log Page:May Support 00:35:43.657 NVMe-MI Commands & Effects Log Page: May Support 00:35:43.657 Data Area 4 for Telemetry Log: Not Supported 00:35:43.657 Error Log Page Entries Supported: 1 00:35:43.657 Keep Alive: Not Supported 00:35:43.657 00:35:43.657 NVM Command Set Attributes 00:35:43.657 ========================== 00:35:43.657 Submission Queue Entry Size 00:35:43.657 Max: 1 00:35:43.657 Min: 1 00:35:43.657 Completion Queue Entry Size 00:35:43.657 Max: 1 00:35:43.657 Min: 1 00:35:43.657 Number of Namespaces: 0 00:35:43.657 Compare Command: Not Supported 00:35:43.657 Write Uncorrectable Command: Not Supported 00:35:43.657 Dataset Management Command: Not Supported 00:35:43.657 Write Zeroes Command: Not Supported 00:35:43.657 Set Features Save Field: Not Supported 00:35:43.657 Reservations: Not Supported 00:35:43.657 Timestamp: Not Supported 00:35:43.657 Copy: Not Supported 00:35:43.657 Volatile Write Cache: Not Present 00:35:43.657 Atomic Write Unit (Normal): 1 00:35:43.657 Atomic Write Unit (PFail): 1 00:35:43.657 Atomic Compare & Write Unit: 1 00:35:43.657 Fused Compare & Write: Not Supported 00:35:43.657 Scatter-Gather List 00:35:43.657 SGL Command Set: Supported 00:35:43.657 SGL Keyed: Not Supported 00:35:43.657 SGL Bit Bucket Descriptor: Not Supported 00:35:43.657 SGL Metadata Pointer: Not Supported 00:35:43.657 Oversized SGL: Not Supported 00:35:43.657 SGL Metadata Address: Not Supported 00:35:43.657 SGL Offset: Supported 00:35:43.657 Transport SGL Data Block: Not Supported 00:35:43.657 Replay Protected Memory Block: 
Not Supported 00:35:43.657 00:35:43.657 Firmware Slot Information 00:35:43.657 ========================= 00:35:43.657 Active slot: 0 00:35:43.657 00:35:43.657 00:35:43.657 Error Log 00:35:43.657 ========= 00:35:43.657 00:35:43.657 Active Namespaces 00:35:43.657 ================= 00:35:43.657 Discovery Log Page 00:35:43.657 ================== 00:35:43.657 Generation Counter: 2 00:35:43.657 Number of Records: 2 00:35:43.657 Record Format: 0 00:35:43.657 00:35:43.657 Discovery Log Entry 0 00:35:43.657 ---------------------- 00:35:43.657 Transport Type: 3 (TCP) 00:35:43.657 Address Family: 1 (IPv4) 00:35:43.657 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:43.657 Entry Flags: 00:35:43.657 Duplicate Returned Information: 0 00:35:43.657 Explicit Persistent Connection Support for Discovery: 0 00:35:43.657 Transport Requirements: 00:35:43.657 Secure Channel: Not Specified 00:35:43.657 Port ID: 1 (0x0001) 00:35:43.657 Controller ID: 65535 (0xffff) 00:35:43.657 Admin Max SQ Size: 32 00:35:43.657 Transport Service Identifier: 4420 00:35:43.657 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:43.657 Transport Address: 10.0.0.1 00:35:43.657 Discovery Log Entry 1 00:35:43.657 ---------------------- 00:35:43.657 Transport Type: 3 (TCP) 00:35:43.657 Address Family: 1 (IPv4) 00:35:43.657 Subsystem Type: 2 (NVM Subsystem) 00:35:43.657 Entry Flags: 00:35:43.657 Duplicate Returned Information: 0 00:35:43.657 Explicit Persistent Connection Support for Discovery: 0 00:35:43.657 Transport Requirements: 00:35:43.657 Secure Channel: Not Specified 00:35:43.657 Port ID: 1 (0x0001) 00:35:43.657 Controller ID: 65535 (0xffff) 00:35:43.657 Admin Max SQ Size: 32 00:35:43.657 Transport Service Identifier: 4420 00:35:43.657 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:43.657 Transport Address: 10.0.0.1 00:35:43.657 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.657 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.657 get_feature(0x01) failed 00:35:43.657 get_feature(0x02) failed 00:35:43.657 get_feature(0x04) failed 00:35:43.657 ===================================================== 00:35:43.657 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.657 ===================================================== 00:35:43.657 Controller Capabilities/Features 00:35:43.657 ================================ 00:35:43.657 Vendor ID: 0000 00:35:43.657 Subsystem Vendor ID: 0000 00:35:43.657 Serial Number: efa0989e9ba63e212b70 00:35:43.657 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:43.657 Firmware Version: 6.7.0-68 00:35:43.657 Recommended Arb Burst: 6 00:35:43.657 IEEE OUI Identifier: 00 00 00 00:35:43.657 Multi-path I/O 00:35:43.657 May have multiple subsystem ports: Yes 00:35:43.657 May have multiple controllers: Yes 00:35:43.657 Associated with SR-IOV VF: No 00:35:43.657 Max Data Transfer Size: Unlimited 00:35:43.657 Max Number of Namespaces: 1024 00:35:43.657 Max Number of I/O Queues: 128 00:35:43.657 NVMe Specification Version (VS): 1.3 00:35:43.657 NVMe Specification Version (Identify): 1.3 00:35:43.657 Maximum Queue Entries: 1024 00:35:43.657 Contiguous Queues Required: No 00:35:43.658 Arbitration Mechanisms Supported 00:35:43.658 Weighted Round Robin: Not Supported 00:35:43.658 Vendor Specific: Not Supported 
00:35:43.658 Reset Timeout: 7500 ms 00:35:43.658 Doorbell Stride: 4 bytes 00:35:43.658 NVM Subsystem Reset: Not Supported 00:35:43.658 Command Sets Supported 00:35:43.658 NVM Command Set: Supported 00:35:43.658 Boot Partition: Not Supported 00:35:43.658 Memory Page Size Minimum: 4096 bytes 00:35:43.658 Memory Page Size Maximum: 4096 bytes 00:35:43.658 Persistent Memory Region: Not Supported 00:35:43.658 Optional Asynchronous Events Supported 00:35:43.658 Namespace Attribute Notices: Supported 00:35:43.658 Firmware Activation Notices: Not Supported 00:35:43.658 ANA Change Notices: Supported 00:35:43.658 PLE Aggregate Log Change Notices: Not Supported 00:35:43.658 LBA Status Info Alert Notices: Not Supported 00:35:43.658 EGE Aggregate Log Change Notices: Not Supported 00:35:43.658 Normal NVM Subsystem Shutdown event: Not Supported 00:35:43.658 Zone Descriptor Change Notices: Not Supported 00:35:43.658 Discovery Log Change Notices: Not Supported 00:35:43.658 Controller Attributes 00:35:43.658 128-bit Host Identifier: Supported 00:35:43.658 Non-Operational Permissive Mode: Not Supported 00:35:43.658 NVM Sets: Not Supported 00:35:43.658 Read Recovery Levels: Not Supported 00:35:43.658 Endurance Groups: Not Supported 00:35:43.658 Predictable Latency Mode: Not Supported 00:35:43.658 Traffic Based Keep ALive: Supported 00:35:43.658 Namespace Granularity: Not Supported 00:35:43.658 SQ Associations: Not Supported 00:35:43.658 UUID List: Not Supported 00:35:43.658 Multi-Domain Subsystem: Not Supported 00:35:43.658 Fixed Capacity Management: Not Supported 00:35:43.658 Variable Capacity Management: Not Supported 00:35:43.658 Delete Endurance Group: Not Supported 00:35:43.658 Delete NVM Set: Not Supported 00:35:43.658 Extended LBA Formats Supported: Not Supported 00:35:43.658 Flexible Data Placement Supported: Not Supported 00:35:43.658 00:35:43.658 Controller Memory Buffer Support 00:35:43.658 ================================ 00:35:43.658 Supported: No 00:35:43.658 00:35:43.658 Persistent Memory Region Support 00:35:43.658 ================================ 00:35:43.658 Supported: No 00:35:43.658 00:35:43.658 Admin Command Set Attributes 00:35:43.658 ============================ 00:35:43.658 Security Send/Receive: Not Supported 00:35:43.658 Format NVM: Not Supported 00:35:43.658 Firmware Activate/Download: Not Supported 00:35:43.658 Namespace Management: Not Supported 00:35:43.658 Device Self-Test: Not Supported 00:35:43.658 Directives: Not Supported 00:35:43.658 NVMe-MI: Not Supported 00:35:43.658 Virtualization Management: Not Supported 00:35:43.658 Doorbell Buffer Config: Not Supported 00:35:43.658 Get LBA Status Capability: Not Supported 00:35:43.658 Command & Feature Lockdown Capability: Not Supported 00:35:43.658 Abort Command Limit: 4 00:35:43.658 Async Event Request Limit: 4 00:35:43.658 Number of Firmware Slots: N/A 00:35:43.658 Firmware Slot 1 Read-Only: N/A 00:35:43.658 Firmware Activation Without Reset: N/A 00:35:43.658 Multiple Update Detection Support: N/A 00:35:43.658 Firmware Update Granularity: No Information Provided 00:35:43.658 Per-Namespace SMART Log: Yes 00:35:43.658 Asymmetric Namespace Access Log Page: Supported 00:35:43.658 ANA Transition Time : 10 sec 00:35:43.658 00:35:43.658 Asymmetric Namespace Access Capabilities 00:35:43.658 ANA Optimized State : Supported 00:35:43.658 ANA Non-Optimized State : Supported 00:35:43.658 ANA Inaccessible State : Supported 00:35:43.658 ANA Persistent Loss State : Supported 00:35:43.658 ANA Change State : Supported 00:35:43.658 ANAGRPID is not 
changed : No 00:35:43.658 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:43.658 00:35:43.658 ANA Group Identifier Maximum : 128 00:35:43.658 Number of ANA Group Identifiers : 128 00:35:43.658 Max Number of Allowed Namespaces : 1024 00:35:43.658 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:43.658 Command Effects Log Page: Supported 00:35:43.658 Get Log Page Extended Data: Supported 00:35:43.658 Telemetry Log Pages: Not Supported 00:35:43.658 Persistent Event Log Pages: Not Supported 00:35:43.658 Supported Log Pages Log Page: May Support 00:35:43.658 Commands Supported & Effects Log Page: Not Supported 00:35:43.658 Feature Identifiers & Effects Log Page:May Support 00:35:43.658 NVMe-MI Commands & Effects Log Page: May Support 00:35:43.658 Data Area 4 for Telemetry Log: Not Supported 00:35:43.658 Error Log Page Entries Supported: 128 00:35:43.658 Keep Alive: Supported 00:35:43.658 Keep Alive Granularity: 1000 ms 00:35:43.658 00:35:43.658 NVM Command Set Attributes 00:35:43.658 ========================== 00:35:43.658 Submission Queue Entry Size 00:35:43.658 Max: 64 00:35:43.658 Min: 64 00:35:43.658 Completion Queue Entry Size 00:35:43.658 Max: 16 00:35:43.658 Min: 16 00:35:43.658 Number of Namespaces: 1024 00:35:43.658 Compare Command: Not Supported 00:35:43.658 Write Uncorrectable Command: Not Supported 00:35:43.658 Dataset Management Command: Supported 00:35:43.658 Write Zeroes Command: Supported 00:35:43.658 Set Features Save Field: Not Supported 00:35:43.658 Reservations: Not Supported 00:35:43.658 Timestamp: Not Supported 00:35:43.658 Copy: Not Supported 00:35:43.658 Volatile Write Cache: Present 00:35:43.658 Atomic Write Unit (Normal): 1 00:35:43.658 Atomic Write Unit (PFail): 1 00:35:43.658 Atomic Compare & Write Unit: 1 00:35:43.658 Fused Compare & Write: Not Supported 00:35:43.658 Scatter-Gather List 00:35:43.658 SGL Command Set: Supported 00:35:43.658 SGL Keyed: Not Supported 00:35:43.658 SGL Bit Bucket Descriptor: Not Supported 00:35:43.658 SGL Metadata Pointer: Not Supported 00:35:43.658 Oversized SGL: Not Supported 00:35:43.658 SGL Metadata Address: Not Supported 00:35:43.658 SGL Offset: Supported 00:35:43.658 Transport SGL Data Block: Not Supported 00:35:43.658 Replay Protected Memory Block: Not Supported 00:35:43.658 00:35:43.658 Firmware Slot Information 00:35:43.658 ========================= 00:35:43.658 Active slot: 0 00:35:43.658 00:35:43.658 Asymmetric Namespace Access 00:35:43.658 =========================== 00:35:43.658 Change Count : 0 00:35:43.658 Number of ANA Group Descriptors : 1 00:35:43.658 ANA Group Descriptor : 0 00:35:43.658 ANA Group ID : 1 00:35:43.658 Number of NSID Values : 1 00:35:43.658 Change Count : 0 00:35:43.658 ANA State : 1 00:35:43.658 Namespace Identifier : 1 00:35:43.658 00:35:43.658 Commands Supported and Effects 00:35:43.658 ============================== 00:35:43.658 Admin Commands 00:35:43.658 -------------- 00:35:43.658 Get Log Page (02h): Supported 00:35:43.658 Identify (06h): Supported 00:35:43.658 Abort (08h): Supported 00:35:43.658 Set Features (09h): Supported 00:35:43.658 Get Features (0Ah): Supported 00:35:43.658 Asynchronous Event Request (0Ch): Supported 00:35:43.658 Keep Alive (18h): Supported 00:35:43.658 I/O Commands 00:35:43.658 ------------ 00:35:43.658 Flush (00h): Supported 00:35:43.658 Write (01h): Supported LBA-Change 00:35:43.658 Read (02h): Supported 00:35:43.658 Write Zeroes (08h): Supported LBA-Change 00:35:43.658 Dataset Management (09h): Supported 00:35:43.658 00:35:43.658 Error Log 00:35:43.658 ========= 
00:35:43.658 Entry: 0 00:35:43.658 Error Count: 0x3 00:35:43.658 Submission Queue Id: 0x0 00:35:43.658 Command Id: 0x5 00:35:43.658 Phase Bit: 0 00:35:43.658 Status Code: 0x2 00:35:43.658 Status Code Type: 0x0 00:35:43.658 Do Not Retry: 1 00:35:43.658 Error Location: 0x28 00:35:43.658 LBA: 0x0 00:35:43.658 Namespace: 0x0 00:35:43.658 Vendor Log Page: 0x0 00:35:43.658 ----------- 00:35:43.658 Entry: 1 00:35:43.658 Error Count: 0x2 00:35:43.658 Submission Queue Id: 0x0 00:35:43.658 Command Id: 0x5 00:35:43.658 Phase Bit: 0 00:35:43.658 Status Code: 0x2 00:35:43.658 Status Code Type: 0x0 00:35:43.658 Do Not Retry: 1 00:35:43.658 Error Location: 0x28 00:35:43.658 LBA: 0x0 00:35:43.658 Namespace: 0x0 00:35:43.658 Vendor Log Page: 0x0 00:35:43.658 ----------- 00:35:43.658 Entry: 2 00:35:43.658 Error Count: 0x1 00:35:43.658 Submission Queue Id: 0x0 00:35:43.658 Command Id: 0x4 00:35:43.658 Phase Bit: 0 00:35:43.658 Status Code: 0x2 00:35:43.658 Status Code Type: 0x0 00:35:43.658 Do Not Retry: 1 00:35:43.658 Error Location: 0x28 00:35:43.658 LBA: 0x0 00:35:43.658 Namespace: 0x0 00:35:43.658 Vendor Log Page: 0x0 00:35:43.658 00:35:43.658 Number of Queues 00:35:43.658 ================ 00:35:43.658 Number of I/O Submission Queues: 128 00:35:43.658 Number of I/O Completion Queues: 128 00:35:43.658 00:35:43.658 ZNS Specific Controller Data 00:35:43.658 ============================ 00:35:43.658 Zone Append Size Limit: 0 00:35:43.658 00:35:43.658 00:35:43.658 Active Namespaces 00:35:43.658 ================= 00:35:43.658 get_feature(0x05) failed 00:35:43.658 Namespace ID:1 00:35:43.658 Command Set Identifier: NVM (00h) 00:35:43.658 Deallocate: Supported 00:35:43.659 Deallocated/Unwritten Error: Not Supported 00:35:43.659 Deallocated Read Value: Unknown 00:35:43.659 Deallocate in Write Zeroes: Not Supported 00:35:43.659 Deallocated Guard Field: 0xFFFF 00:35:43.659 Flush: Supported 00:35:43.659 Reservation: Not Supported 00:35:43.659 Namespace Sharing Capabilities: Multiple Controllers 00:35:43.659 Size (in LBAs): 3750748848 (1788GiB) 00:35:43.659 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:43.659 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:43.659 UUID: 7817b577-abfa-4033-8d0a-91b57ef5fdef 00:35:43.659 Thin Provisioning: Not Supported 00:35:43.659 Per-NS Atomic Units: Yes 00:35:43.659 Atomic Write Unit (Normal): 8 00:35:43.659 Atomic Write Unit (PFail): 8 00:35:43.659 Preferred Write Granularity: 8 00:35:43.659 Atomic Compare & Write Unit: 8 00:35:43.659 Atomic Boundary Size (Normal): 0 00:35:43.659 Atomic Boundary Size (PFail): 0 00:35:43.659 Atomic Boundary Offset: 0 00:35:43.659 NGUID/EUI64 Never Reused: No 00:35:43.659 ANA group ID: 1 00:35:43.659 Namespace Write Protected: No 00:35:43.659 Number of LBA Formats: 1 00:35:43.659 Current LBA Format: LBA Format #00 00:35:43.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:43.659 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:43.659 rmmod nvme_tcp 00:35:43.659 rmmod nvme_fabrics 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:43.659 01:54:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.200 01:54:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:46.200 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:46.201 01:54:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:50.408 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:35:50.408 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:50.408 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:50.408 00:35:50.408 real 0m19.399s 00:35:50.409 user 0m5.187s 00:35:50.409 sys 0m11.281s 00:35:50.409 01:54:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:50.409 01:54:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.409 ************************************ 00:35:50.409 END TEST nvmf_identify_kernel_target 00:35:50.409 ************************************ 00:35:50.409 01:54:16 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:50.409 01:54:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:50.409 01:54:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:50.409 01:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.409 ************************************ 00:35:50.409 START TEST nvmf_auth_host 00:35:50.409 ************************************ 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:50.409 * Looking for test storage... 00:35:50.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:50.409 01:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:58.548 01:54:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:58.548 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:58.548 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:58.548 Found net devices under 0000:31:00.0: cvl_0_0 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:58.548 Found net devices under 0000:31:00.1: cvl_0_1 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.548 
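The nvmf_tcp_init sequence that follows sets up a small two-port TCP test topology: the target-side interface (cvl_0_0) is moved into its own network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while the initiator-side interface (cvl_0_1) stays in the root namespace as 10.0.0.1, and a firewall rule admits NVMe/TCP traffic on port 4420 before both directions are verified with ping. A condensed sketch of that sequence, using the interface and namespace names from this run (not the verbatim nvmf/common.sh code):

ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target NIC
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP (4420) arriving on the initiator-side link
ping -c 1 10.0.0.2                                                  # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root ns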
01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:58.548 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.549 01:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:58.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:35:58.549 00:35:58.549 --- 10.0.0.2 ping statistics --- 00:35:58.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.549 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:35:58.549 00:35:58.549 --- 10.0.0.1 ping statistics --- 00:35:58.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.549 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=38797 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 38797 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 38797 ']' 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:58.549 01:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6816d7e81153bf34feeb5a0a3eda8c9 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NWU 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6816d7e81153bf34feeb5a0a3eda8c9 0 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6816d7e81153bf34feeb5a0a3eda8c9 0 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6816d7e81153bf34feeb5a0a3eda8c9 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:58.809 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:59.070 
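Each gen_dhchap_key call in this stretch pulls len/2 random bytes with xxd and pipes them through a short inline Python helper (format_dhchap_key) to produce the secret file that is then chmod'ed to 0600. A minimal self-contained sketch of that step, under the assumption that the DHHC-1 secret layout is base64(key bytes followed by a little-endian CRC-32 of the key) with the digest index from the digests table above (0 = null, 1-3 = sha256/384/512); the real helper in nvmf/common.sh may differ in detail:

key_hex=$(xxd -p -c0 -l 32 /dev/urandom)          # 32 random bytes, as in 'gen_dhchap_key sha512 64'
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key_hex" 3 > "$file" <<'PY'
import base64, binascii, sys, zlib
key = binascii.unhexlify(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")       # assumed framing: key || CRC-32(key)
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"
cat "$file"                                       # e.g. DHHC-1:03:<48 base64 chars>: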
01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=47d84e1c94b34fef6ec604486452f9038d1c8ca76bfc40b421fbf2dbdb677026 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FIB 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 47d84e1c94b34fef6ec604486452f9038d1c8ca76bfc40b421fbf2dbdb677026 3 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 47d84e1c94b34fef6ec604486452f9038d1c8ca76bfc40b421fbf2dbdb677026 3 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=47d84e1c94b34fef6ec604486452f9038d1c8ca76bfc40b421fbf2dbdb677026 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FIB 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FIB 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FIB 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=08907205df14a79372e40dccc84d948171f6e39d6cc00cb3 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 08907205df14a79372e40dccc84d948171f6e39d6cc00cb3 0 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 08907205df14a79372e40dccc84d948171f6e39d6cc00cb3 0 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=08907205df14a79372e40dccc84d948171f6e39d6cc00cb3 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BWU 00:35:59.070 01:54:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BWU 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4eccd00d36f2a277020d9ec6bac687a8814acb3d9b7aaf9d 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.veW 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4eccd00d36f2a277020d9ec6bac687a8814acb3d9b7aaf9d 2 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4eccd00d36f2a277020d9ec6bac687a8814acb3d9b7aaf9d 2 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4eccd00d36f2a277020d9ec6bac687a8814acb3d9b7aaf9d 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.veW 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.veW 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.veW 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4734ece548a6aa19a76cb1c9fd9e3275 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.47z 00:35:59.070 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4734ece548a6aa19a76cb1c9fd9e3275 1 00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4734ece548a6aa19a76cb1c9fd9e3275 1 
00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4734ece548a6aa19a76cb1c9fd9e3275 00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:59.071 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.47z 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.47z 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.47z 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=645643ac3c2d95454e2889de9edc396c 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.drW 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 645643ac3c2d95454e2889de9edc396c 1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 645643ac3c2d95454e2889de9edc396c 1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=645643ac3c2d95454e2889de9edc396c 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.drW 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.drW 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.drW 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=392c904fca0e212486ccacd509b4a46d8f86e4c79cb349a9 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vUF 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 392c904fca0e212486ccacd509b4a46d8f86e4c79cb349a9 2 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 392c904fca0e212486ccacd509b4a46d8f86e4c79cb349a9 2 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=392c904fca0e212486ccacd509b4a46d8f86e4c79cb349a9 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vUF 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vUF 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vUF 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=103e8772a31354bcd3efd85a23c10e4c 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dks 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 103e8772a31354bcd3efd85a23c10e4c 0 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 103e8772a31354bcd3efd85a23c10e4c 0 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=103e8772a31354bcd3efd85a23c10e4c 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dks 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dks 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Dks 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=18b77b3668f723ddd9705ddcc68ae6cfb85a4191f1ad89bc058bf4b8b321a86d 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eqc 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 18b77b3668f723ddd9705ddcc68ae6cfb85a4191f1ad89bc058bf4b8b321a86d 3 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 18b77b3668f723ddd9705ddcc68ae6cfb85a4191f1ad89bc058bf4b8b321a86d 3 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=18b77b3668f723ddd9705ddcc68ae6cfb85a4191f1ad89bc058bf4b8b321a86d 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:59.332 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eqc 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eqc 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eqc 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 38797 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 38797 ']' 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
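The gen_dhchap_key calls traced above follow one recipe: pull N random bytes from /dev/urandom as a hex string with xxd, wrap that string into the DHHC-1:<digest>:<base64>: text form with an inline python snippet, and store the result mode 0600 in a mktemp file. The lines below are a minimal stand-alone sketch of that recipe, not part of the test output; the CRC-32 suffix inside the base64 payload and its little-endian byte order are assumptions inferred from the key strings that appear later in the log, since the xtrace does not show the python body.

digest=1                                   # 0=null, 1=sha256, 2=sha384, 3=sha512 (per the digests map above)
key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex characters, the sha256-sized secret seen in the trace
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()              # the ASCII hex string itself is the secret
digest = int(sys.argv[2])
# Assumption: a CRC-32 of the secret is appended (little-endian) before base64 encoding,
# which matches the length and trailing bytes of the DHHC-1 strings printed later in the log.
payload = base64.b64encode(secret + zlib.crc32(secret).to_bytes(4, "little")).decode()
print(f"DHHC-1:{digest:02x}:{payload}:")
EOF
chmod 0600 "$file"
echo "$file"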
00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NWU 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FIB ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FIB 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BWU 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.veW ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.veW 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.47z 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.drW ]] 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.drW 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.594 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vUF 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Dks ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Dks 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eqc 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:59.855 01:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:59.855 01:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:59.855 01:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:04.062 Waiting for block devices as requested 00:36:04.062 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:04.062 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:04.323 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:04.323 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:04.323 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:04.584 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:04.584 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:04.584 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:04.847 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:04.847 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:05.419 No valid GPT data, bailing 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:05.419 00:36:05.419 Discovery Log Number of Records 2, Generation counter 2 00:36:05.419 =====Discovery Log Entry 0====== 00:36:05.419 trtype: tcp 00:36:05.419 adrfam: ipv4 00:36:05.419 subtype: current discovery subsystem 00:36:05.419 treq: not specified, sq flow control disable supported 00:36:05.419 portid: 1 00:36:05.419 trsvcid: 4420 00:36:05.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:05.419 traddr: 10.0.0.1 00:36:05.419 eflags: none 00:36:05.419 sectype: none 00:36:05.419 =====Discovery Log Entry 1====== 00:36:05.419 trtype: tcp 00:36:05.419 adrfam: ipv4 00:36:05.419 subtype: nvme subsystem 00:36:05.419 treq: not specified, sq flow control disable supported 00:36:05.419 portid: 1 00:36:05.419 trsvcid: 4420 00:36:05.419 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:05.419 traddr: 10.0.0.1 00:36:05.419 eflags: none 00:36:05.419 sectype: none 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.419 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 
]] 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.420 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.681 nvme0n1 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.681 01:54:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.681 
01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.681 01:54:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.942 nvme0n1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.942 01:54:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.942 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.203 nvme0n1 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
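Condensed, the connect_authenticate round that just completed comes down to a handful of RPCs against the target application. The sketch below is an illustration, not part of the log; it assumes rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and it reuses the key names and files registered earlier in this section (key1 -> /tmp/spdk.key-null.BWU, ckey1 -> /tmp/spdk.key-sha384.veW).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.BWU       # host secret (done once, host/auth.sh@81)
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.veW     # controller secret for bidirectional auth
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'            # the test expects "nvme0" here
$rpc bdev_nvme_detach_controller nvme0                       # tear down before the next digest/dhgroup combination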
00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.203 nvme0n1 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.203 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:06.463 01:54:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.463 nvme0n1 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.463 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.464 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.724 nvme0n1 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.724 01:54:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.724 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.985 nvme0n1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.985 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.246 nvme0n1 00:36:07.246 
01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.246 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.507 nvme0n1 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.507 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.768 nvme0n1 00:36:07.768 01:54:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.768 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.768 01:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.768 
01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.768 01:54:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.768 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.028 nvme0n1 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:08.028 01:54:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.028 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.288 nvme0n1 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.288 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.547 01:54:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.547 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.807 nvme0n1 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.807 01:54:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.807 01:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.807 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.067 nvme0n1 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.067 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.327 nvme0n1 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.327 01:54:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.327 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.588 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.848 nvme0n1 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.848 01:54:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:09.848 01:54:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.848 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.418 nvme0n1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.418 
01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.418 01:54:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.418 01:54:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.678 nvme0n1 00:36:10.678 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.678 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.678 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.678 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.678 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.938 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.939 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 nvme0n1 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.509 
01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.509 01:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.770 nvme0n1 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.770 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.030 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.289 nvme0n1 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.289 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.550 01:54:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 nvme0n1 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.120 01:54:39 
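For reference, every iteration in this part of the trace runs the same host-side RPC sequence. A condensed sketch of one pass, assuming rpc_cmd in the harness forwards to scripts/rpc.py against the running SPDK application, and that key0/ckey0 are key names registered earlier in the test (that registration is not shown in this excerpt):

    # restrict the host to the digest/DH group under test for this iteration
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # attach to the target with the host key (plus the controller key for bidirectional auth);
    # on success the RPC prints the bdev it created, presumably the bare "nvme0n1" seen between entries above
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the controller came up, then tear it down before the next digest/dhgroup/key combination
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected to print "nvme0"
    scripts/rpc.py bdev_nvme_detach_controller nvme0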
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.120 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.381 01:54:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.951 nvme0n1 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.951 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.212 01:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.834 nvme0n1 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.834 
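On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51 above) echoes the parameters for the selected key id: the digest wrapped as 'hmac(<digest>)', the DH group name, the DHHC-1 host key and, when one exists, the controller key. set -x does not print redirection targets, so where those echoes land is not visible in this trace; presumably they go into the kernel nvmet configfs entry for the host NQN. A sketch under that assumption, in which the path and attribute names are assumptions and only the echoed values come from the trace:

    # ASSUMED configfs layout; $key/$ckey stand for the DHHC-1 strings echoed in the trace
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # digest for this iteration
    echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"    # DH group for this iteration
    echo "$key"         > "$host_dir/dhchap_key"        # host key for the selected key id
    [[ -n $ckey ]] &&
        echo "$ckey"    > "$host_dir/dhchap_ctrl_key"   # controller key, only when a ckey is defined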
01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.834 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.835 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 nvme0n1 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.820 
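Key id 4 is the unidirectional case: its ckey is empty, so the :+ expansion of ckeys[keyid] at host/auth.sh@58 produces no words and the attach call carries only --dhchap-key key4, meaning the host authenticates itself but does not ask the controller to authenticate back. The expansion behaves as in this sketch (the array contents are illustrative placeholders):

    ckeys=([0]="DHHC-1:03:...controller-key..." [4]="")      # key 4 has no controller key
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"            # 0 -> nothing is appended to bdev_nvme_attach_controller
    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    printf '%s\n' "${ckey[@]}"    # --dhchap-ctrlr-key and ckey0, passed as two extra arguments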
01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.820 01:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.393 nvme0n1 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.393 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.656 nvme0n1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
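The overall shape of this phase is three nested loops, visible in the trace as host/auth.sh@100, @101 and @102: every digest is tried against every DH group and every key id, with a full program-target / connect / verify / detach cycle per combination. A minimal sketch of that structure; the digests, dhgroups and keys arrays are defined earlier in auth.sh and are not shown in this excerpt:

    for digest in "${digests[@]}"; do            # sha256 and sha384 appear in this run
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192 appear in this run
            for keyid in "${!keys[@]}"; do       # key ids 0..4 appear in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103: program the target
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104: attach, verify, detach
            done
        done
    done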
00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.656 01:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.916 nvme0n1 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.916 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.177 nvme0n1 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.177 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 nvme0n1 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 nvme0n1 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.701 01:54:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.701 nvme0n1 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.701 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
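get_main_ns_ip (nvmf/common.sh@741-755 above) resolves the address to connect to for the current transport: it maps each transport to the name of an environment variable, picks the entry for tcp (NVMF_INITIATOR_IP), dereferences it and prints the result, 10.0.0.1 in this run. Roughly, as a sketch that assumes TEST_TRANSPORT and NVMF_INITIATOR_IP are exported by the surrounding test environment:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # holds the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1              # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }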
00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.962 nvme0n1 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.962 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.221 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.222 nvme0n1 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.222 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.482 nvme0n1 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.482 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.744 01:54:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.744 nvme0n1 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.744 01:54:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.744 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.005 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.267 nvme0n1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.267 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.529 nvme0n1 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.529 01:54:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.529 01:54:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.790 nvme0n1 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:19.790 01:54:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.790 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.051 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.312 nvme0n1 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:20.312 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.574 nvme0n1 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.574 01:54:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.144 nvme0n1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.144 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.715 nvme0n1 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.715 01:54:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.715 01:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.285 nvme0n1 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.285 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.545 nvme0n1 00:36:22.545 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.545 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.545 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.545 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.545 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.804 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.805 01:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.065 nvme0n1 00:36:23.065 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.065 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.065 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.065 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.065 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.325 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.326 01:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.896 nvme0n1 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.896 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.156 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.157 01:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.729 nvme0n1 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.729 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.990 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 nvme0n1 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.560 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.821 01:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.391 nvme0n1 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:26.391 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.392 01:54:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.392 01:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.332 nvme0n1 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.332 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.592 nvme0n1 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.592 01:54:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.592 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 nvme0n1 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.854 01:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.854 nvme0n1 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.854 01:54:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.854 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:27.855 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.855 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.116 01:54:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.116 nvme0n1 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.116 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.377 nvme0n1 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.377 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.378 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.639 nvme0n1 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.639 
01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.639 01:54:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.639 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.640 01:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.902 nvme0n1 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.902 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.164 nvme0n1 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.164 01:54:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.164 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.426 nvme0n1 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.426 
01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.426 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.688 nvme0n1 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.688 01:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.950 nvme0n1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.950 01:54:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.950 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.211 nvme0n1 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.211 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.473 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.734 nvme0n1 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:30.734 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.735 01:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.994 nvme0n1 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.994 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.254 nvme0n1 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.254 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.516 01:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.777 nvme0n1 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.777 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.038 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.299 nvme0n1 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.299 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.561 01:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.822 nvme0n1 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.822 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.083 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.084 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 nvme0n1 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.607 01:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.868 nvme0n1 00:36:33.868 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.868 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.868 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.868 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.868 01:55:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.868 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjY4MTZkN2U4MTE1M2JmMzRmZWViNWEwYTNlZGE4Yzk1sW0K: 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: ]] 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDdkODRlMWM5NGIzNGZlZjZlYzYwNDQ4NjQ1MmY5MDM4ZDFjOGNhNzZiZmM0MGI0MjFmYmYyZGJkYjY3NzAyNivtwwM=: 00:36:34.130 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.131 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.704 nvme0n1 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.704 01:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.704 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.705 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.705 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.648 nvme0n1 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.648 01:55:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDczNGVjZTU0OGE2YWExOWE3NmNiMWM5ZmQ5ZTMyNzWoWPmW: 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQ1NjQzYWMzYzJkOTU0NTRlMjg4OWRlOWVkYzM5NmPIbBXs: 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.648 01:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.593 nvme0n1 00:36:36.593 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.593 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.593 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.593 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzkyYzkwNGZjYTBlMjEyNDg2Y2NhY2Q1MDliNGE0NmQ4Zjg2ZTRjNzljYjM0OWE5JZBLDQ==: 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTAzZTg3NzJhMzEzNTRiY2QzZWZkODVhMjNjMTBlNGNzkTD3: 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:36.594 01:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.594 01:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.164 nvme0n1 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThiNzdiMzY2OGY3MjNkZGQ5NzA1ZGRjYzY4YWU2Y2ZiODVhNDE5MWYxYWQ4OWJjMDU4YmY0YjhiMzIxYTg2ZOD/bI4=: 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:37.164 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:37.165 01:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.108 nvme0n1 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg5MDcyMDVkZjE0YTc5MzcyZTQwZGNjYzg0ZDk0ODE3MWY2ZTM5ZDZjYzAwY2IzofG6gQ==: 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: ]] 00:36:38.108 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVjY2QwMGQzNmYyYTI3NzAyMGQ5ZWM2YmFjNjg3YTg4MTRhY2IzZDliN2FhZjlkDc6gEQ==: 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.109 
01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.109 request: 00:36:38.109 { 00:36:38.109 "name": "nvme0", 00:36:38.109 "trtype": "tcp", 00:36:38.109 "traddr": "10.0.0.1", 00:36:38.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:38.109 "adrfam": "ipv4", 00:36:38.109 "trsvcid": "4420", 00:36:38.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:38.109 "method": "bdev_nvme_attach_controller", 00:36:38.109 "req_id": 1 00:36:38.109 } 00:36:38.109 Got JSON-RPC error response 00:36:38.109 response: 00:36:38.109 { 00:36:38.109 "code": -5, 00:36:38.109 "message": "Input/output error" 00:36:38.109 } 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:38.109 
01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.109 request: 00:36:38.109 { 00:36:38.109 "name": "nvme0", 00:36:38.109 "trtype": "tcp", 00:36:38.109 "traddr": "10.0.0.1", 00:36:38.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:38.109 "adrfam": "ipv4", 00:36:38.109 "trsvcid": "4420", 00:36:38.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:38.109 "dhchap_key": "key2", 00:36:38.109 "method": "bdev_nvme_attach_controller", 00:36:38.109 "req_id": 1 00:36:38.109 } 00:36:38.109 Got JSON-RPC error response 00:36:38.109 response: 00:36:38.109 { 00:36:38.109 "code": -5, 00:36:38.109 "message": "Input/output error" 00:36:38.109 } 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:38.109 
01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.109 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.371 request: 00:36:38.371 { 00:36:38.371 "name": "nvme0", 00:36:38.371 "trtype": "tcp", 00:36:38.371 "traddr": "10.0.0.1", 00:36:38.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:38.371 "adrfam": "ipv4", 00:36:38.371 "trsvcid": "4420", 00:36:38.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:38.371 "dhchap_key": "key1", 00:36:38.371 "dhchap_ctrlr_key": "ckey2", 00:36:38.371 "method": "bdev_nvme_attach_controller", 00:36:38.371 "req_id": 1 
00:36:38.371 } 00:36:38.371 Got JSON-RPC error response 00:36:38.371 response: 00:36:38.371 { 00:36:38.371 "code": -5, 00:36:38.371 "message": "Input/output error" 00:36:38.371 } 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:38.371 rmmod nvme_tcp 00:36:38.371 rmmod nvme_fabrics 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 38797 ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 38797 ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 38797' 00:36:38.371 killing process with pid 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 38797 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:38.371 01:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:40.918 01:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:44.219 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:44.219 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:44.219 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:44.219 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:44.219 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:44.219 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:44.480 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:44.481 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:44.481 01:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NWU /tmp/spdk.key-null.BWU /tmp/spdk.key-sha256.47z /tmp/spdk.key-sha384.vUF /tmp/spdk.key-sha512.eqc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:44.481 01:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:48.787 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
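The clean_kernel_target records above undo the kernel nvmet configuration in roughly the reverse order it was created: the allowed host is unlinked from the subsystem, the subsystem is unlinked from the port, the namespace, port and subsystem directories are removed, and the nvmet modules are unloaded before setup.sh rebinds the devices and the temporary /tmp/spdk.key-* files are deleted. Condensed into a sketch below, with the configfs paths taken directly from the log; the log only shows an "echo 0" step without its target, so writing the 0 to the namespace's enable attribute is an assumption of this sketch.

# Teardown mirroring the clean_kernel_target records above (a sketch).
cfg=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

rm "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"      # unlink the allowed host
rmdir "$cfg/hosts/$hostnqn"
echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumed target of the bare 'echo 0' in the log
rm -f "$cfg/ports/1/subsystems/$subnqn"                  # unlink subsystem from the port
rmdir "$cfg/subsystems/$subnqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$subnqn"
modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules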
00:36:48.787 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:48.787 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:48.787 00:36:48.787 real 0m58.326s 00:36:48.787 user 0m51.850s 00:36:48.787 sys 0m15.587s 00:36:48.787 01:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:48.787 01:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.787 ************************************ 00:36:48.787 END TEST nvmf_auth_host 00:36:48.787 ************************************ 00:36:48.787 01:55:14 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:36:48.787 01:55:14 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:48.787 01:55:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:48.788 01:55:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:48.788 01:55:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:48.788 ************************************ 00:36:48.788 START TEST nvmf_digest 00:36:48.788 ************************************ 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:48.788 * Looking for test storage... 
00:36:48.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:48.788 01:55:14 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:48.788 01:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:56.927 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:56.927 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:56.927 Found net devices under 0000:31:00.0: cvl_0_0 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:56.927 Found net devices under 0000:31:00.1: cvl_0_1 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:56.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:36:56.927 00:36:56.927 --- 10.0.0.2 ping statistics --- 00:36:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.927 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:56.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:36:56.927 00:36:56.927 --- 10.0.0.1 ping statistics --- 00:36:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.927 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:56.927 ************************************ 00:36:56.927 START TEST nvmf_digest_clean 00:36:56.927 ************************************ 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=55921 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 55921 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 55921 ']' 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.927 
01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:56.927 01:55:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:56.927 [2024-07-12 01:55:22.670191] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:56.927 [2024-07-12 01:55:22.670244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.927 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.927 [2024-07-12 01:55:22.741268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.927 [2024-07-12 01:55:22.771123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:56.927 [2024-07-12 01:55:22.771159] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:56.927 [2024-07-12 01:55:22.771167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.927 [2024-07-12 01:55:22.771174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.927 [2024-07-12 01:55:22.771179] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:56.927 [2024-07-12 01:55:22.771197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.188 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:57.188 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:57.188 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:57.188 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.188 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.189 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:57.189 null0 00:36:57.189 [2024-07-12 01:55:23.527391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.449 [2024-07-12 01:55:23.551572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=55958 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 55958 /var/tmp/bperf.sock 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 55958 ']' 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:57.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:57.449 01:55:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:57.449 [2024-07-12 01:55:23.604303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:57.449 [2024-07-12 01:55:23.604350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55958 ] 00:36:57.449 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.449 [2024-07-12 01:55:23.687833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.449 [2024-07-12 01:55:23.719007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.019 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:58.019 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:58.019 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:58.019 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:58.019 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:58.280 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:58.280 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:58.851 nvme0n1 00:36:58.851 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:58.851 01:55:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:58.851 Running I/O for 2 seconds... 
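For reference, the xtrace above already contains everything needed to replay one digest run by hand against the same target. A minimal sketch, assuming the SPDK tree is the current directory; every command, flag, socket path, and NQN below is taken verbatim from the trace (only the Jenkins workspace prefix is dropped):

    # Start bdevperf in RPC-driven mode on a private socket, same flags as the traced run.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Complete framework init, then attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Drive the timed workload exactly as digest.sh does, via the bdevperf RPC helper.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests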
00:37:00.763 00:37:00.763 Latency(us) 00:37:00.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.763 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:00.763 nvme0n1 : 2.00 20234.63 79.04 0.00 0.00 6319.24 2798.93 23483.73 00:37:00.763 =================================================================================================================== 00:37:00.763 Total : 20234.63 79.04 0.00 0.00 6319.24 2798.93 23483.73 00:37:00.763 0 00:37:00.763 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:00.763 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:00.763 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:00.763 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:00.763 | select(.opcode=="crc32c") 00:37:00.763 | "\(.module_name) \(.executed)"' 00:37:00.763 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 55958 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 55958 ']' 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 55958 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55958 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:01.023 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:01.024 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55958' 00:37:01.024 killing process with pid 55958 00:37:01.024 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 55958 00:37:01.024 Received shutdown signal, test time was about 2.000000 seconds 00:37:01.024 00:37:01.024 Latency(us) 00:37:01.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.024 =================================================================================================================== 00:37:01.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:01.024 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 55958 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:01.284 01:55:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=56724 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 56724 /var/tmp/bperf.sock 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 56724 ']' 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:01.284 01:55:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.284 [2024-07-12 01:55:27.435106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:01.284 [2024-07-12 01:55:27.435169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56724 ] 00:37:01.284 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:01.284 Zero copy mechanism will not be used. 
00:37:01.284 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.284 [2024-07-12 01:55:27.517300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.284 [2024-07-12 01:55:27.548207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.855 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:01.855 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:37:01.855 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:01.855 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:01.855 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:02.114 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:02.114 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:02.685 nvme0n1 00:37:02.685 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:02.685 01:55:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:02.685 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:02.685 Zero copy mechanism will not be used. 00:37:02.685 Running I/O for 2 seconds... 
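After each bperf run in this section, digest_clean verifies that CRC-32C digests were actually computed, and by the expected accel module (software here, since scan_dsa is false throughout). Stripped of the xtrace noise, the check traced before and after this point amounts to the following sketch (same bperf.sock path as above):

    # Pull accel framework statistics from bdevperf and keep only the crc32c operations.
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # The test then requires executed > 0 and module_name == "software".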
00:37:04.596 00:37:04.596 Latency(us) 00:37:04.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.596 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:04.596 nvme0n1 : 2.00 3130.61 391.33 0.00 0.00 5106.19 1037.65 8628.91 00:37:04.596 =================================================================================================================== 00:37:04.596 Total : 3130.61 391.33 0.00 0.00 5106.19 1037.65 8628.91 00:37:04.596 0 00:37:04.596 01:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:04.596 01:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:04.596 01:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:04.596 01:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:04.596 | select(.opcode=="crc32c") 00:37:04.596 | "\(.module_name) \(.executed)"' 00:37:04.596 01:55:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 56724 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 56724 ']' 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 56724 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 56724 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56724' 00:37:04.855 killing process with pid 56724 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 56724 00:37:04.855 Received shutdown signal, test time was about 2.000000 seconds 00:37:04.855 00:37:04.855 Latency(us) 00:37:04.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.855 =================================================================================================================== 00:37:04.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 56724 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:04.855 01:55:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=57487 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 57487 /var/tmp/bperf.sock 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 57487 ']' 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:04.855 01:55:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:05.114 [2024-07-12 01:55:31.251616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:05.114 [2024-07-12 01:55:31.251675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57487 ] 00:37:05.114 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.114 [2024-07-12 01:55:31.331195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.114 [2024-07-12 01:55:31.359433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.683 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.683 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:37:05.683 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:05.683 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:05.683 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:05.943 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.943 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:06.202 nvme0n1 00:37:06.202 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:06.202 01:55:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:06.202 Running I/O for 2 seconds... 
00:37:08.740 00:37:08.740 Latency(us) 00:37:08.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.740 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:08.740 nvme0n1 : 2.00 21947.20 85.73 0.00 0.00 5825.57 2225.49 11960.32 00:37:08.740 =================================================================================================================== 00:37:08.740 Total : 21947.20 85.73 0.00 0.00 5825.57 2225.49 11960.32 00:37:08.740 0 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:08.740 | select(.opcode=="crc32c") 00:37:08.740 | "\(.module_name) \(.executed)"' 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 57487 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 57487 ']' 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 57487 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 57487 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57487' 00:37:08.740 killing process with pid 57487 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 57487 00:37:08.740 Received shutdown signal, test time was about 2.000000 seconds 00:37:08.740 00:37:08.740 Latency(us) 00:37:08.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.740 =================================================================================================================== 00:37:08.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 57487 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:08.740 01:55:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=58204 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 58204 /var/tmp/bperf.sock 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 58204 ']' 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:08.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:08.740 01:55:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:08.740 [2024-07-12 01:55:34.935265] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:08.740 [2024-07-12 01:55:34.935323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58204 ] 00:37:08.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:08.740 Zero copy mechanism will not be used. 
00:37:08.740 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.740 [2024-07-12 01:55:35.014871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.740 [2024-07-12 01:55:35.043299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.336 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:09.596 01:55:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:10.163 nvme0n1 00:37:10.163 01:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:10.163 01:55:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:10.163 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:10.163 Zero copy mechanism will not be used. 00:37:10.163 Running I/O for 2 seconds... 
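This fourth bperf run completes the digest_clean matrix. The four sub-runs traced in this section differ only in the workload, I/O size, and queue depth passed to the run_bperf helper from host/digest.sh; schematically (the last argument is scan_dsa, kept false so the software crc32c path is exercised):

    run_bperf randread  4096   128 false   # 4 KiB random reads,    queue depth 128
    run_bperf randread  131072 16  false   # 128 KiB random reads,  queue depth 16
    run_bperf randwrite 4096   128 false   # 4 KiB random writes,   queue depth 128
    run_bperf randwrite 131072 16  false   # 128 KiB random writes, queue depth 16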
00:37:12.070 00:37:12.070 Latency(us) 00:37:12.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.070 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:12.070 nvme0n1 : 2.00 3580.38 447.55 0.00 0.00 4462.28 2020.69 11851.09 00:37:12.070 =================================================================================================================== 00:37:12.070 Total : 3580.38 447.55 0.00 0.00 4462.28 2020.69 11851.09 00:37:12.070 0 00:37:12.070 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:12.070 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:12.070 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:12.070 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:12.070 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:12.070 | select(.opcode=="crc32c") 00:37:12.070 | "\(.module_name) \(.executed)"' 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 58204 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 58204 ']' 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 58204 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58204 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58204' 00:37:12.331 killing process with pid 58204 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 58204 00:37:12.331 Received shutdown signal, test time was about 2.000000 seconds 00:37:12.331 00:37:12.331 Latency(us) 00:37:12.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.331 =================================================================================================================== 00:37:12.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:12.331 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 58204 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 55921 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 55921 ']' 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 55921 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55921 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55921' 00:37:12.590 killing process with pid 55921 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 55921 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 55921 00:37:12.590 00:37:12.590 real 0m16.268s 00:37:12.590 user 0m31.900s 00:37:12.590 sys 0m3.298s 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:12.590 ************************************ 00:37:12.590 END TEST nvmf_digest_clean 00:37:12.590 ************************************ 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:12.590 01:55:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:12.590 ************************************ 00:37:12.590 START TEST nvmf_digest_error 00:37:12.590 ************************************ 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=59032 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 59032 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 59032 ']' 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.850 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:12.850 01:55:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.851 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:12.851 01:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:12.851 [2024-07-12 01:55:39.003921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:12.851 [2024-07-12 01:55:39.003967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.851 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.851 [2024-07-12 01:55:39.074844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.851 [2024-07-12 01:55:39.105238] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.851 [2024-07-12 01:55:39.105276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.851 [2024-07-12 01:55:39.105284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.851 [2024-07-12 01:55:39.105290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.851 [2024-07-12 01:55:39.105296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.851 [2024-07-12 01:55:39.105313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.421 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:13.421 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:13.421 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:13.421 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:13.421 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:13.683 [2024-07-12 01:55:39.803362] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:37:13.683 null0 00:37:13.683 [2024-07-12 01:55:39.877709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.683 [2024-07-12 01:55:39.901895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=59108 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 59108 /var/tmp/bperf.sock 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 59108 ']' 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:13.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:13.683 01:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:13.683 [2024-07-12 01:55:39.953693] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:13.683 [2024-07-12 01:55:39.953740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:37:13.683 EAL: No free 2048 kB hugepages reported on node 1 00:37:13.683 [2024-07-12 01:55:40.035822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.949 [2024-07-12 01:55:40.066297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.520 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:14.520 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:14.520 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.520 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.780 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:14.780 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.780 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:14.780 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.780 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:14.781 01:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:15.041 nvme0n1 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:15.041 01:55:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:15.041 Running I/O for 2 seconds... 
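At this point the error-injection variant is fully wired: the nvmf target has its crc32c opcode assigned to the error accel module and corruption injected into it (accel_error_inject_error -o crc32c -t corrupt -i 256), while the bdevperf initiator attaches with --ddgst, NVMe error statistics and a -1 bdev retry count. That is why the run below is a stream of data digest errors from nvme_tcp.c followed by TRANSIENT TRANSPORT ERROR completions. A minimal sketch of that wiring, restricted to RPCs that appear in this trace (socket paths as observed in this job: rpc_cmd talks to the nvmf target on /var/tmp/spdk.sock, bperf_rpc to bdevperf on /var/tmp/bperf.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: route crc32c through the error-injection accel module, then corrupt it
  $rpc -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  $rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator (bdevperf) side: count NVMe errors, retry failed I/O, attach with data digest on
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

The -1 retry count keeps bdevperf retrying the failed reads at the bdev_nvme layer, so the two-second workload can keep running despite the injected digest failures.
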
00:37:15.041 [2024-07-12 01:55:41.287174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.287204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.287212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.300756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.300776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.300782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.313753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.313771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.313777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.327048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.327066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.327072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.337787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.337804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.337811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.350619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.350637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.350644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.364140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.364158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.364168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.375958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.375974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.375981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.041 [2024-07-12 01:55:41.387034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.041 [2024-07-12 01:55:41.387051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.041 [2024-07-12 01:55:41.387057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.400077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.400094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.400101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.412653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.412669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.412676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.425992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.426009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.426015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.438416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.438433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.438439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.449901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.449917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.449924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.461543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.304 [2024-07-12 01:55:41.461561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.304 [2024-07-12 01:55:41.461567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.304 [2024-07-12 01:55:41.475632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.475653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.475660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.487299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.487315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.487322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.499864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.499887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.510817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.510834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.510841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.523691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.523708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.523714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.535822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.535838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:15.305 [2024-07-12 01:55:41.535845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.547793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.547810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.547816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.561323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.561340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.561346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.574845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.574863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.574869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.586911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.586928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.586935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.600323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.600340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.600347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.613420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.613437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.613444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.624333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.624349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:20099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.624356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.635857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.635874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.635880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.305 [2024-07-12 01:55:41.649444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.305 [2024-07-12 01:55:41.649460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.305 [2024-07-12 01:55:41.649466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.566 [2024-07-12 01:55:41.661210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.566 [2024-07-12 01:55:41.661228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.566 [2024-07-12 01:55:41.661238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.566 [2024-07-12 01:55:41.673920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.566 [2024-07-12 01:55:41.673938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.566 [2024-07-12 01:55:41.673944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.566 [2024-07-12 01:55:41.686589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.566 [2024-07-12 01:55:41.686608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.566 [2024-07-12 01:55:41.686615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.566 [2024-07-12 01:55:41.699766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.566 [2024-07-12 01:55:41.699783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.699790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.712162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.712178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.712185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.724855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.724872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.724878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.736624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.736642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.736648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.748037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.748054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.748060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.760680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.760697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.760703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.773849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.773866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.787693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.787710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.787716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.800588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 
[2024-07-12 01:55:41.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.800611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.813011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.813028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.825693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.825710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.825716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.836995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.837012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.837019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.849298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.849315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.849321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.861475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.861492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.861498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.875668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.875684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.875691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.888196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.888212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.888219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.900326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.900342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.900352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.567 [2024-07-12 01:55:41.912567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.567 [2024-07-12 01:55:41.912583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.567 [2024-07-12 01:55:41.912589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.924034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.924051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.924058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.936132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.936149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.936155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.949352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.949369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.949376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.961375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.961391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.961398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.973985] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.974001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.974007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.987337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.987354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.987360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:41.998955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:41.998972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:41.998978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:42.012811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:42.012831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:42.012838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:42.024519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:42.024536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:42.024542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:42.036597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.828 [2024-07-12 01:55:42.036614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.828 [2024-07-12 01:55:42.036620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.828 [2024-07-12 01:55:42.050594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.050611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.050617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.062735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.062752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.062758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.074815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.074832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.074838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.085783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.085800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.085807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.099116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.099134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.099140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.110898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.110916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.110922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.122445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.122462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.135522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.135539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.135546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.148750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.148767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.148773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.160770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.160787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.160794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.829 [2024-07-12 01:55:42.172176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:15.829 [2024-07-12 01:55:42.172193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.829 [2024-07-12 01:55:42.172199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.185179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.185196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.185203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.197694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.197711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.197718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.211054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.211071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.211078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.223425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.223442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.223452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.234742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.234759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.234765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.248105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.248122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.248128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.259488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.259506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.259512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.273316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.273333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.273339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.285514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.285531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.285537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.297932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.297949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.091 [2024-07-12 01:55:42.297955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.311140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.091 [2024-07-12 01:55:42.311157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:16.091 [2024-07-12 01:55:42.311163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.091 [2024-07-12 01:55:42.324268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.324286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.324292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.334119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.334135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.334142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.347781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.347805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.358511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.358528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.358534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.371051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.371068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.371074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.383869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.383885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.383892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.397345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.397362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.397368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.409118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.409135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.409141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.420509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.420525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.420532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.432464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.432481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.432491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.092 [2024-07-12 01:55:42.445818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.092 [2024-07-12 01:55:42.445835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.092 [2024-07-12 01:55:42.445842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.457375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.457392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.457399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.469138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.469154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.469161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.482010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.482028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.482034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.494922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.494939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.494946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.508411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.508428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.508435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.521454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.521471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.521477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.532918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.532940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.544352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.544372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.544379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.558722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.558739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.558745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.569412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 
00:37:16.354 [2024-07-12 01:55:42.569429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.569435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.582566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.582582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.582589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.596011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.596028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.596034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.608068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.608085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.354 [2024-07-12 01:55:42.608091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.354 [2024-07-12 01:55:42.619099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.354 [2024-07-12 01:55:42.619115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.619121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.632792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.632809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.632815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.643523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.643539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.643546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.655839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.655855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.655861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.669017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.669034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.669041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.682445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.682461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.682468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.694712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.694727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.355 [2024-07-12 01:55:42.706112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.355 [2024-07-12 01:55:42.706129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.355 [2024-07-12 01:55:42.706135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.718167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.718184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.718190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.730418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.730434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.730441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.744638] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.744655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.744661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.757354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.757370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.757380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.770787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.770804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.770810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.781487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.781503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.781509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.795192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.795209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.617 [2024-07-12 01:55:42.795215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.617 [2024-07-12 01:55:42.806946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.617 [2024-07-12 01:55:42.806963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.806969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.819908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.819925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.819931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.833318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.833335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.833341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.844263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.844279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.844286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.855900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.855917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.855923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.869240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.869257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.869263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.883091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.883107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.883114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.896378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.896395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.896401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.906356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.906372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.906378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.920012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.920028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.920034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.931697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.931713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.931719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.944996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.945013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.945019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.957969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.957986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.957992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.618 [2024-07-12 01:55:42.971505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.618 [2024-07-12 01:55:42.971522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.618 [2024-07-12 01:55:42.971532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.887 [2024-07-12 01:55:42.983971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.887 [2024-07-12 01:55:42.983988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.887 [2024-07-12 01:55:42.983994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.887 [2024-07-12 01:55:42.994954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.887 [2024-07-12 01:55:42.994971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.887 [2024-07-12 01:55:42.994977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.887 [2024-07-12 01:55:43.008284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.887 [2024-07-12 01:55:43.008302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.887 [2024-07-12 01:55:43.008308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.021093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.021110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.021116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.031583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.031600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.031606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.044889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.044906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.044912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.058449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.058466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.058472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.071405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.071422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.071428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.084297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.084317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:16.888 [2024-07-12 01:55:43.084323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.094514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.094531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.094537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.106713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.106729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.106735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.120430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.120447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.120453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.131529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.131547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.131553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.145273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.145296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.157510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.157526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.157533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.168741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.168758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21668 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.168764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.181142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.181159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.181165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.194026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.194043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.194049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.205453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.205470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.205476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.218838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.218855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.888 [2024-07-12 01:55:43.231250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:16.888 [2024-07-12 01:55:43.231267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.888 [2024-07-12 01:55:43.231273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.152 [2024-07-12 01:55:43.243580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:17.152 [2024-07-12 01:55:43.243596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.152 [2024-07-12 01:55:43.243602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.152 [2024-07-12 01:55:43.256545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:17.152 [2024-07-12 01:55:43.256561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.152 [2024-07-12 01:55:43.256567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.152 [2024-07-12 01:55:43.267178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f70830) 00:37:17.152 [2024-07-12 01:55:43.267195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.152 [2024-07-12 01:55:43.267201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.152 00:37:17.152 Latency(us) 00:37:17.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.152 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:17.152 nvme0n1 : 2.00 20461.00 79.93 0.00 0.00 6248.51 2198.19 16056.32 00:37:17.152 =================================================================================================================== 00:37:17.152 Total : 20461.00 79.93 0.00 0.00 6248.51 2198.19 16056.32 00:37:17.152 0 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:17.152 | .driver_specific 00:37:17.152 | .nvme_error 00:37:17.152 | .status_code 00:37:17.152 | .command_transient_transport_error' 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 59108 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 59108 ']' 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 59108 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:17.152 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59108 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59108' 00:37:17.412 killing process with pid 59108 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 59108 00:37:17.412 Received shutdown signal, test time was about 2.000000 seconds 00:37:17.412 00:37:17.412 Latency(us) 00:37:17.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.412 
=================================================================================================================== 00:37:17.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 59108 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=59832 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 59832 /var/tmp/bperf.sock 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 59832 ']' 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:17.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:17.412 01:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:17.412 [2024-07-12 01:55:43.679814] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:17.412 [2024-07-12 01:55:43.679889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:37:17.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:17.412 Zero copy mechanism will not be used. 
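(The pass/fail check for the digest-error run above is the "(( 160 > 0 ))" step in the trace: the test reads the per-controller NVMe error counters from the bdevperf RPC socket and asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded. A minimal sketch of that check, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev shown in the trace; the helper name mirrors get_transient_errcount from host/digest.sh.)

    get_transient_errcount() {
        # bdev_get_iostat reports the NVMe error counters enabled via --nvme-error-stat
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    # 160 transient transport errors were counted in this run; any value > 0 passes
    (( $(get_transient_errcount nvme0n1) > 0 ))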
00:37:17.412 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.412 [2024-07-12 01:55:43.763399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.673 [2024-07-12 01:55:43.791780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.243 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.503 nvme0n1 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:18.764 01:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.764 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:18.764 Zero copy mechanism will not be used. 00:37:18.764 Running I/O for 2 seconds... 
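(The trace above sets up the second error case, randread with 131072-byte I/O at queue depth 16: bdevperf is started on /var/tmp/bperf.sock, NVMe error statistics are enabled, crc32c corruption is injected through the accel error-injection RPC, and the controller is attached with data digest enabled (--ddgst) so that data digest verification fails on READs and each failure is counted as a transient transport error. Condensed as a sketch, with paths, address and NQN taken from the trace; the un-socketed rpc_cmd calls are assumed to go to the target application's default RPC socket, which is an inference rather than something the log states.)

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdevperf side: keep NVMe error counters and retry failed I/O indefinitely
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # clear any previous crc32c injection, then attach with data digest enabled
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt crc32c results so data digest verification fails (-i 32 as in the trace)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # drive the configured 2-second randread workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests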
00:37:18.764 [2024-07-12 01:55:44.965553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:44.965583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:44.965591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:44.977370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:44.977390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:44.977397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:44.990665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:44.990683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:44.990694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:45.004049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:45.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:45.004073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:45.014089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:45.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:45.014113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:45.024195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:45.024212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:45.024219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:45.034600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.764 [2024-07-12 01:55:45.034617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.764 [2024-07-12 01:55:45.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.764 [2024-07-12 01:55:45.046265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.046282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.046289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.056539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.056556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.056562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.066471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.066488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.066494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.076988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.077005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.086967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.086984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.086990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.096561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.096578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.096585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.105965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.105982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.105989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:18.765 [2024-07-12 01:55:45.118036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:18.765 [2024-07-12 01:55:45.118053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.765 [2024-07-12 01:55:45.118059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.129580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.129597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.129604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.139152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.139169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.139175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.150986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.151003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.151010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.161251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.161268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.161274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.170736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.170754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.170763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.181145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.181162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:19.027 [2024-07-12 01:55:45.181169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.190833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.190857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.200618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.200635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.200641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.209325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.209343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.209350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.218570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.218588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.218594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.228200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.228217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.239105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.239123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.239129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.248508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.248526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.248533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.259493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.259514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.259520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.269024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.269042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.269048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.281072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.281089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.281096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.290351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.290368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.290375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.300412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.300428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.300435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.309963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.309979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.309985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.320170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.320186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.320193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.329915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.329932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.329938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.339947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.339963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.339969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.349898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.349915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.349921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.360032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.360048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.360054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.027 [2024-07-12 01:55:45.371793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.027 [2024-07-12 01:55:45.371810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.027 [2024-07-12 01:55:45.371816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.382659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.382676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.382683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.392097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.392115] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.392121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.401918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.401936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.401942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.412475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.412492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.412498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.422764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.422782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.422788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.431980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.431997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.432007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.441807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.441831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.452132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.452149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.452155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.463928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.463945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.463951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.477745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.477762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.477769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.489772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.489789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.489795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.502571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.502588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.502595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.516571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.516588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.516594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.529227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.529249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.529255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.542269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.542293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.554256] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.554273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.554280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.565349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.565366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.565372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.575255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.575273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.575279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.584373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.584390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.584396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.596477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.596494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.596500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.606337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.606354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.606360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.616913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.616930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.616936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:37:19.289 [2024-07-12 01:55:45.626860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.626876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.626888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.289 [2024-07-12 01:55:45.636326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.289 [2024-07-12 01:55:45.636343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.289 [2024-07-12 01:55:45.636350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.645795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.645812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.645818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.654781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.654798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.665615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.665632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.675307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.675324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.675331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.684671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.684688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.684694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.696731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.696749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.696755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.707465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.707482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.707488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.717873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.717894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.717900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.727770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.727787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.727793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.737059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.737076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.737082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.746584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.746601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.746607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.757771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.757789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.757795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.768286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.768303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.768309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.778045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.778062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.788133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.788151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.788157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.550 [2024-07-12 01:55:45.798871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.550 [2024-07-12 01:55:45.798888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.550 [2024-07-12 01:55:45.798895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.809991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.810008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.810014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.819543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.819560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.819566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.828683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.828700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:19.551 [2024-07-12 01:55:45.828707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.840156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.840173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.840179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.849902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.849920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.849926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.859379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.859397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.859403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.869809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.869826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.869833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.878804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.878822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.878828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.887707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.887725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.887734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.551 [2024-07-12 01:55:45.898071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.551 [2024-07-12 01:55:45.898088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.551 [2024-07-12 01:55:45.898095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.909204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.909221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.909228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.919099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.919117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.919123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.928086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.928103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.928109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.939532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.939549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.939556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.949663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.949686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.959689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.959706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.959712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.969461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.969479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.969485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.979855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.979876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.979882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.989979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.989996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.990002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:45.999812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:45.999829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:45.999836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.008542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.008559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.008566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.020791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.020809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.030673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.030692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.030698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.041308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 
[2024-07-12 01:55:46.041326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.041332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.053046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.053064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.053071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.062662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.062680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.062686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.072083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.072101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.072107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.082838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.082856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.082863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.093342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.812 [2024-07-12 01:55:46.093360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.812 [2024-07-12 01:55:46.093366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.812 [2024-07-12 01:55:46.103194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.103212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.103218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.113190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.113208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.113214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.124084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.124102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.124108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.134974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.134992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.134998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.145611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.145629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.145635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.155434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.155452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.155461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:19.813 [2024-07-12 01:55:46.167024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:19.813 [2024-07-12 01:55:46.167041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.813 [2024-07-12 01:55:46.167048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.178040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.178057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.178064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.188650] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.188668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.188674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.197447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.197465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.197472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.206959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.206977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.206983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.216380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.216398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.216404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.226306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.226324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.226330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.235947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.235966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.235972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.246081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.246099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.246105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
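[editorial note] The repeating nvme_tcp.c:1450 messages above come from the initiator's receive path: each C2H data PDU carries a CRC32C data digest, nvme_tcp_accel_seq_recv_compute_crc32_done recomputes it over the received payload, and on a mismatch the corresponding READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the error path this test is exercising. As a minimal illustrative sketch only (standalone code, not SPDK's implementation), the crc32c() helper below shows the checksum being verified: the reflected Castagnoli polynomial 0x82F63B78 that NVMe/TCP digests are built on. The 0xE3069283 output for "123456789" is the standard CRC32C check value.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Minimal bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * This is the checksum NVMe/TCP header and data digests use; a mismatch
 * against the digest carried in a C2H data PDU is what the
 * "data digest error" messages in this log report.
 */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	const char check[] = "123456789";

	/* Standard CRC32C check value: prints 0xE3069283. */
	printf("crc32c(\"%s\") = 0x%08X\n", check, crc32c(check, strlen(check)));
	return 0;
}

SPDK ships its own CRC32C helpers; the bitwise loop here is only to show what the digest covers and why deliberately corrupted payloads surface as the transient transport errors logged above, not how the driver computes it.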
00:37:20.074 [2024-07-12 01:55:46.256472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.256490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.256496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.265782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.265800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.265806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.276361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.276379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.276385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.286863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.286880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.286886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.296871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.296888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.296895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.306978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.306997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.307003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.317434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.317452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.317458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.327104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.327122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.327131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.336877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.336895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.336901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.348266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.348284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.348290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.358829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.358846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.358853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.368279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.368296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.368302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.377764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.074 [2024-07-12 01:55:46.377782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.074 [2024-07-12 01:55:46.377788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.074 [2024-07-12 01:55:46.386100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.075 [2024-07-12 01:55:46.386116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.075 [2024-07-12 01:55:46.386123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.075 [2024-07-12 01:55:46.396267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.075 [2024-07-12 01:55:46.396284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.075 [2024-07-12 01:55:46.396290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.075 [2024-07-12 01:55:46.407659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.075 [2024-07-12 01:55:46.407676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.075 [2024-07-12 01:55:46.407682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.075 [2024-07-12 01:55:46.417345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.075 [2024-07-12 01:55:46.417366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.075 [2024-07-12 01:55:46.417372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.075 [2024-07-12 01:55:46.427253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.075 [2024-07-12 01:55:46.427271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.075 [2024-07-12 01:55:46.427278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.437400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.437418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.437424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.448200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.448218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.448225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.458778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.458796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.458802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.470328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.470346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.470352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.480508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.480526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.490347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.490365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.490371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.500174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.500192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.500198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.511038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.511056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.511062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.520668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.520686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.520693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.530727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.530745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 
[2024-07-12 01:55:46.530751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.541844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.541862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.541868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.552321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.552338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.552345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.561727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.561745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.561751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.571401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.571420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.571426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.580480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.580498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.580504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.588625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.588643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.588653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.598782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.598800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.598806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.608943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.608960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.608967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.619421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.619439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.619445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.629366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.629383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.629390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.639624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.639641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.639647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.649414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.649438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.659259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.659277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.659282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.336 [2024-07-12 01:55:46.668648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.336 [2024-07-12 01:55:46.668666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.336 [2024-07-12 01:55:46.668672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.337 [2024-07-12 01:55:46.679173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.337 [2024-07-12 01:55:46.679194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.337 [2024-07-12 01:55:46.679200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.337 [2024-07-12 01:55:46.690849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.337 [2024-07-12 01:55:46.690866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.337 [2024-07-12 01:55:46.690872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.701062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.701086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.710203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.710221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.710227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.722051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.722068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.722074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.734280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.734298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.734304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.744387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.744404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.744411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.754467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.754485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.754491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.762992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.763010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.763016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.773389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.773407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.773413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.784126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.784144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.784150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.794346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.794364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.794370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.804476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.804499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.814025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 
[2024-07-12 01:55:46.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.814049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.824198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.824216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.824222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.836439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.836457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.836463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.847095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.847113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.847119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.857559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.857577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.857586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.868305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.868322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.868328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.879407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.879425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.879431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.888659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.888677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.888683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.897138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.897155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.897161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.905330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.905348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.905354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.915397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.915416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.915422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.924336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.598 [2024-07-12 01:55:46.924354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.598 [2024-07-12 01:55:46.924360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:20.598 [2024-07-12 01:55:46.934190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.599 [2024-07-12 01:55:46.934207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.599 [2024-07-12 01:55:46.934213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.599 [2024-07-12 01:55:46.944576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd00350) 00:37:20.599 [2024-07-12 01:55:46.944600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.599 [2024-07-12 01:55:46.944606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:20.599 00:37:20.599 Latency(us) 00:37:20.599 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:37:20.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:20.599 nvme0n1 : 2.00 2999.66 374.96 0.00 0.00 5331.10 1426.77 16165.55 00:37:20.599 =================================================================================================================== 00:37:20.599 Total : 2999.66 374.96 0.00 0.00 5331.10 1426.77 16165.55 00:37:20.859 0 00:37:20.859 01:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:20.859 01:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:20.859 01:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:20.859 | .driver_specific 00:37:20.859 | .nvme_error 00:37:20.859 | .status_code 00:37:20.859 | .command_transient_transport_error' 00:37:20.859 01:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 59832 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 59832 ']' 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 59832 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59832 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59832' 00:37:20.859 killing process with pid 59832 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 59832 00:37:20.859 Received shutdown signal, test time was about 2.000000 seconds 00:37:20.859 00:37:20.859 Latency(us) 00:37:20.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.859 =================================================================================================================== 00:37:20.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.859 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 59832 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=60544 00:37:21.120 
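The (( 193 > 0 )) check above is the pass/fail criterion for this run: get_transient_errcount simply reads the per-bdev NVMe error counters that bdevperf keeps when --nvme-error-stat is enabled, over the bperf RPC socket. A minimal stand-alone sketch of the same query, assuming the /var/tmp/bperf.sock socket and nvme0n1 bdev name used in this run and that rpc.py and jq are on PATH (paths relative to the SPDK checkout):

  # Pull the transient-transport-error counter out of bdevperf's iostat output
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # In this run the query returned 193; any value greater than 0 means the
  # injected digest errors were surfaced as COMMAND TRANSIENT TRANSPORT ERROR
  # completions, which is what the test asserts.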
01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 60544 /var/tmp/bperf.sock 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 60544 ']' 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:21.120 01:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:21.120 [2024-07-12 01:55:47.348073] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:21.120 [2024-07-12 01:55:47.348133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:37:21.120 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.120 [2024-07-12 01:55:47.426448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.120 [2024-07-12 01:55:47.454793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.063 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.324 nvme0n1 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:22.324 01:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:22.324 Running I/O for 2 seconds... 00:37:22.586 [2024-07-12 01:55:48.690289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190e4140 00:37:22.586 [2024-07-12 01:55:48.691876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.586 [2024-07-12 01:55:48.691905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:22.586 [2024-07-12 01:55:48.700616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190df550 00:37:22.586 [2024-07-12 01:55:48.701591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.586 [2024-07-12 01:55:48.701610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.586 [2024-07-12 01:55:48.712404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190e0630 00:37:22.586 [2024-07-12 01:55:48.713372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.586 [2024-07-12 01:55:48.713389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.586 [2024-07-12 01:55:48.724192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190e1710 00:37:22.586 [2024-07-12 01:55:48.725162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.586 [2024-07-12 01:55:48.725179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.586 [2024-07-12 01:55:48.735978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190de8a8 00:37:22.587 [2024-07-12 01:55:48.736956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.736971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.747742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fda78 00:37:22.587 [2024-07-12 01:55:48.748675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.748691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.759516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fef90 00:37:22.587 [2024-07-12 01:55:48.760494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.760510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.771252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fcdd0 00:37:22.587 [2024-07-12 01:55:48.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.772234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.783016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fbcf0 00:37:22.587 [2024-07-12 01:55:48.784008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.784025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.794790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fac10 00:37:22.587 [2024-07-12 01:55:48.795769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.795785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.806570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f9b30 00:37:22.587 [2024-07-12 01:55:48.807545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.807560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.818333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f8a50 00:37:22.587 [2024-07-12 01:55:48.819299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.819315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.830104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f7970 00:37:22.587 [2024-07-12 01:55:48.831078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.831094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.841857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f6890 00:37:22.587 [2024-07-12 01:55:48.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.842846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.853600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f57b0 00:37:22.587 [2024-07-12 01:55:48.854577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.854592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.865315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f46d0 00:37:22.587 [2024-07-12 01:55:48.866296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.866311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.877064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f35f0 00:37:22.587 [2024-07-12 01:55:48.878033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.878048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.888876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2510 00:37:22.587 [2024-07-12 01:55:48.889852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.889867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.900627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190e01f8 00:37:22.587 [2024-07-12 01:55:48.901606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.901621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.912358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190e12d8 00:37:22.587 [2024-07-12 
01:55:48.913339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.913355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.924095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ddc00 00:37:22.587 [2024-07-12 01:55:48.925067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.925082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.587 [2024-07-12 01:55:48.935833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190dece0 00:37:22.587 [2024-07-12 01:55:48.936807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.587 [2024-07-12 01:55:48.936822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:48.947571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fdeb0 00:37:22.849 [2024-07-12 01:55:48.948508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:48.948524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:48.959272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ff3c8 00:37:22.849 [2024-07-12 01:55:48.960243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:48.960259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:48.970996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fc998 00:37:22.849 [2024-07-12 01:55:48.971968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:48.971984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:48.982731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fb8b8 00:37:22.849 [2024-07-12 01:55:48.983701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:48.983716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:48.994478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190fa7d8 
00:37:22.849 [2024-07-12 01:55:48.995426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:48.995444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.006168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f96f8 00:37:22.849 [2024-07-12 01:55:49.007137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.007153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.017463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.018409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.018424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.029918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.030884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.030900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.041593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.042514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.042530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.053285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.054241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.054257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.064974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.065930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.065946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.076838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.077801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.077816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.088524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.089501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.089517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.849 [2024-07-12 01:55:49.100225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.849 [2024-07-12 01:55:49.101190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.849 [2024-07-12 01:55:49.101206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.111919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.112882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.112898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.123623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.124590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.124606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.135324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.136282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.136297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.147033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.147995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.148010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.158734] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.159658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.159673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.170431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.171390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.171406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.182110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.183071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.183086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:22.850 [2024-07-12 01:55:49.193809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:22.850 [2024-07-12 01:55:49.194770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.850 [2024-07-12 01:55:49.194785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.205522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.206469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.206485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.217226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.218188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.218203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.228911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.229831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.229846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.240589] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.241552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.241566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.252296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.253255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.253270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.264046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.265007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.265022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.275730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.276694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.276709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.287425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.288379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.288394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.111 [2024-07-12 01:55:49.299120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.111 [2024-07-12 01:55:49.300082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.111 [2024-07-12 01:55:49.300101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.310824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.311786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.311802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.322524] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.323500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.323515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.334253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.335206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.335222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.345962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.346923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.346938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.357653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.358617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.358633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.369326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.370286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.370301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.381033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.381994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.382009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.392717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.393682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.393698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 
01:55:49.404437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.405397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.405415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.416115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.417078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.417094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.427824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.428787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.428804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.439510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.440472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.440488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.451219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.452178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.452193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.112 [2024-07-12 01:55:49.462917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.112 [2024-07-12 01:55:49.463887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.112 [2024-07-12 01:55:49.463903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.474608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.475527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.475542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:37:23.374 [2024-07-12 01:55:49.486285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.487242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.487257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.497957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.498917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.498934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.509654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.510624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.510639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.521369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.522320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.522336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.533054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.534015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.534030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.544751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.545705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.545720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.556440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.557373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.557389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.568138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.569087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.569103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.579820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.580781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.580796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.591534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.592505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.592520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.603235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.604193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.604208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.614924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.615878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.374 [2024-07-12 01:55:49.615893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.374 [2024-07-12 01:55:49.626603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.374 [2024-07-12 01:55:49.627535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.627550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.638312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.639232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.639247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.650012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.650975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.650990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.661701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.662664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.662679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.673375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.674334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.674349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.685035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.685955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.685970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.696714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.697674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.697689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.708415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.709379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.709396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.375 [2024-07-12 01:55:49.720097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.375 [2024-07-12 01:55:49.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.375 [2024-07-12 01:55:49.721076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.731784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.732708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.732723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.743458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.744430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.755164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.756123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.756138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.766835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.767821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.778547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.779492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.779507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.790235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.791194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.791209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.801904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.802852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.802867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.813585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.814513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.814529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.825284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.826241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.826257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.836959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.837934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.837949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.848639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.849603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.849618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.860330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.861287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.861302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.872004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.872970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.872985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.883688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.884647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.884662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.895386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.896346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.896361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.907063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.908029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.908045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.918769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.640 [2024-07-12 01:55:49.919728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.640 [2024-07-12 01:55:49.919743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.640 [2024-07-12 01:55:49.930434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.931388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 01:55:49.931403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.641 [2024-07-12 01:55:49.942128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.943063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 01:55:49.943078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.641 [2024-07-12 01:55:49.953814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.954785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 01:55:49.954801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.641 [2024-07-12 01:55:49.965526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.966503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 
01:55:49.966518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.641 [2024-07-12 01:55:49.977223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.978188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 01:55:49.978203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.641 [2024-07-12 01:55:49.988928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.641 [2024-07-12 01:55:49.989858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.641 [2024-07-12 01:55:49.989874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.000626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.956 [2024-07-12 01:55:50.002058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.002076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.013023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190ef6a8 00:37:23.956 [2024-07-12 01:55:50.013983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.014001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.024831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.025796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.025811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.036562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.037507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.037523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.048302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.049251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 
[2024-07-12 01:55:50.049266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.060020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.060977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.060993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.071747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.072703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.072718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.083467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.084423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.084438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.095182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.096319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.096334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.107079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.108029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.118789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.119747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.119763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.130492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.131420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23395 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:23.956 [2024-07-12 01:55:50.131435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.142197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.143153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.143168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.153900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.154808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.154824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.165607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.166561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.166576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.177290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.178244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.178259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.189019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.189964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.189979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.200709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.201658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.201673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.956 [2024-07-12 01:55:50.212406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.956 [2024-07-12 01:55:50.213354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15901 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.956 [2024-07-12 01:55:50.213369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.224122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.225033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.225049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.235832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.236788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.236803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.247515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.248472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.248486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.259205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.260162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.260177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.270891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.271847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.271862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.282606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.283556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.283571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.294319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.295223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:21125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.295243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:23.957 [2024-07-12 01:55:50.306022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:23.957 [2024-07-12 01:55:50.306971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:23.957 [2024-07-12 01:55:50.306987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.317716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.318668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.318686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.329432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.330377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.330392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.341120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.342069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.342084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.352828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.353780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.353795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.364522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.365469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.365485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.376211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.377159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:118 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.377174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.387906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.388854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.388869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.399613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.400565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.400580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.411319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.412271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.412286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.423030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.423988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.424003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.434723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.435665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.435680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.446418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.447369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.447384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.458096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.459048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.459064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.469806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.470760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.470776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.481514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.482469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.482484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.493220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.494176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.494191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.504894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.505845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.505860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.516596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.517553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.517569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.528301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.529240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.529256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.540001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 
01:55:50.540955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.540971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.551734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.552684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.552700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.563429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.564358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.575108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.576059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.586836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.587771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.587786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.598531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.599465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.599480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.610266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.268 [2024-07-12 01:55:50.611213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.611228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.268 [2024-07-12 01:55:50.621941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 
00:37:24.268 [2024-07-12 01:55:50.622891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.268 [2024-07-12 01:55:50.622909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 [2024-07-12 01:55:50.633626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.529 [2024-07-12 01:55:50.634557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.529 [2024-07-12 01:55:50.634572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 [2024-07-12 01:55:50.645315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.529 [2024-07-12 01:55:50.646261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.529 [2024-07-12 01:55:50.646277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 [2024-07-12 01:55:50.657031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.529 [2024-07-12 01:55:50.657976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.529 [2024-07-12 01:55:50.657991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 [2024-07-12 01:55:50.668735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.529 [2024-07-12 01:55:50.669692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.529 [2024-07-12 01:55:50.669707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 [2024-07-12 01:55:50.680451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a650) with pdu=0x2000190f2d80 00:37:24.529 [2024-07-12 01:55:50.681396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:24.529 [2024-07-12 01:55:50.681412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:24.529 00:37:24.529 Latency(us) 00:37:24.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.529 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:24.529 nvme0n1 : 2.01 21783.57 85.09 0.00 0.00 5868.12 2867.20 13653.33 00:37:24.529 =================================================================================================================== 00:37:24.529 Total : 21783.57 85.09 0.00 0.00 5868.12 2867.20 13653.33 00:37:24.529 0 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:24.529 01:55:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:24.529 | .driver_specific 00:37:24.529 | .nvme_error 00:37:24.529 | .status_code 00:37:24.529 | .command_transient_transport_error' 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 60544 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 60544 ']' 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 60544 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:24.529 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60544 00:37:24.790 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:24.790 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:24.790 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60544' 00:37:24.790 killing process with pid 60544 00:37:24.790 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 60544 00:37:24.790 Received shutdown signal, test time was about 2.000000 seconds 00:37:24.790 00:37:24.790 Latency(us) 00:37:24.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.790 =================================================================================================================== 00:37:24.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:24.790 01:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 60544 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=61290 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 61290 /var/tmp/bperf.sock 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 61290 ']' 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/bperf.sock 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:24.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:24.790 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.790 [2024-07-12 01:55:51.086053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:24.790 [2024-07-12 01:55:51.086123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:37:24.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:24.790 Zero copy mechanism will not be used. 00:37:24.790 EAL: No free 2048 kB hugepages reported on node 1 00:37:25.051 [2024-07-12 01:55:51.170162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.051 [2024-07-12 01:55:51.198481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.622 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:25.883 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.883 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:25.883 01:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:26.144 nvme0n1 00:37:26.144 01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:26.144 01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.144 01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:26.144 01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.144 
01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:26.144 01:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.144 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:26.144 Zero copy mechanism will not be used. 00:37:26.144 Running I/O for 2 seconds... 00:37:26.144 [2024-07-12 01:55:52.402151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.402523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.402550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.413527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.413886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.413906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.424632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.424977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.424995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.435074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.435420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.435437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.444840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.445182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.445199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.453489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.453619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.453634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:37:26.144 [2024-07-12 01:55:52.462816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.463044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.463060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.469791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.470121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.144 [2024-07-12 01:55:52.470138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.144 [2024-07-12 01:55:52.477411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.144 [2024-07-12 01:55:52.477738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.145 [2024-07-12 01:55:52.477755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.145 [2024-07-12 01:55:52.485272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.145 [2024-07-12 01:55:52.485618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.145 [2024-07-12 01:55:52.485635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.145 [2024-07-12 01:55:52.495391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.145 [2024-07-12 01:55:52.495714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.145 [2024-07-12 01:55:52.495730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.506805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.506877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.506893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.517301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.517628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.517645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.527423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.527511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.527526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.537386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.537704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.537721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.548903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.549132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.549148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.558099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.558427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.558444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.567827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.568171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.568188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.577734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.577839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.577855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.588936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.589182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.600424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.600759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.600777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.612976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.613333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.613354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.623197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.623447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.623464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.633169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.633492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.633509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.643824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.644155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.644172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.653687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.653932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.653948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.663681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.663971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.663988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.674328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.674666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.674682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.686303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.686620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.686635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.698400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.698692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.698709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.711195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.711490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.723000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.723264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.723279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.733177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.733416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 [2024-07-12 01:55:52.733433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.743296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.407 [2024-07-12 01:55:52.743696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.407 
[2024-07-12 01:55:52.743713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.407 [2024-07-12 01:55:52.753382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.408 [2024-07-12 01:55:52.753775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.408 [2024-07-12 01:55:52.753792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.763575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.763808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.763824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.772227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.772510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.781859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.782271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.782288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.790179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.790595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.790611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.797168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.797528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.797545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.804975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.805312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.805329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.811072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.811426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.811442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.817876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.818083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.818098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.825520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.825825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.825841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.832969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.833304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.833321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.838802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.839159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.839175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.847036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.847225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.847245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.852860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.853050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.853068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.861683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.862039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.862055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.870226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.870576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.870592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.879243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.879569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.879585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.890607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.891045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.901057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.901360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.901377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.909719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.910105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.910121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.918434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.918709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.918726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.927779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.928064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.928080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.937523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.937787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.937803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.947366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.947721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.947738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.958147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.958442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.958459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.967725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.968039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.968055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.978162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.978499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.978516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.988565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.988815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.988832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:52.998985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.669 [2024-07-12 01:55:52.999370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.669 [2024-07-12 01:55:52.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.669 [2024-07-12 01:55:53.006266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.670 [2024-07-12 01:55:53.006659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.670 [2024-07-12 01:55:53.006676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.670 [2024-07-12 01:55:53.016956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.670 [2024-07-12 01:55:53.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.670 [2024-07-12 01:55:53.017246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.026312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.026644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.026660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.037020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.037449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.037465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.047773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.048278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.048295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.059682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 
00:37:26.930 [2024-07-12 01:55:53.060142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.060159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.070613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.070837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.070853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.082187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.082577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.082593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.093891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.094259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.094275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.104710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.105105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.105122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.115725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.116042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.116062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.127221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.127505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.139076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.139555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.139571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.150348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.150703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.150719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.161803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.162287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.162304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.173032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.173311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.173327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.183412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.183693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.194379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.194638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.194653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.205287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.205537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.205553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.216250] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.216476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.216492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.228458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.228661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.228675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.239914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.240358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.240374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.251138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.251476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.262740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.262894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.262909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:26.930 [2024-07-12 01:55:53.274370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:26.930 [2024-07-12 01:55:53.274601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.930 [2024-07-12 01:55:53.274616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.285859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.286171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.286187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:37:27.192 [2024-07-12 01:55:53.297337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.297586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.297601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.308196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.308405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.308421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.319196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.319331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.319346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.330026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.330344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.330360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.339813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.340248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.351696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.352156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.352172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.363190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.363509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.363525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.374790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.375115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.375131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.386066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.386200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.386215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.397631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.397820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.397835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.407591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.407735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.407753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.419350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.419704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.419720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.430861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.431064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.431079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.439815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.439998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.440014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.447487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.447660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.447675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.454718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.455000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.455015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.462447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.462710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.462725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.467870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.467996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.468011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.472474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.472601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.472617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.477505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.477686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.477702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.481700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.481864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.481880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.488288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.488454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.495461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.495710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.495725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.504305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.504474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.512754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.512994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.513011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.521108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.521352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.521368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.531067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.531289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 [2024-07-12 01:55:53.531304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.192 [2024-07-12 01:55:53.542115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.192 [2024-07-12 01:55:53.542368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.192 
[2024-07-12 01:55:53.542386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.552897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.553182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.553198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.562915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.563036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.563050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.572393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.572766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.572782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.582796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.582967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.582983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.592264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.592551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.592567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.598834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.598989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.599005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.604362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.604550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.611332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.611492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.611507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.619551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.619826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.619845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.628470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.628654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.628669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.636747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.637078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.637093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.645045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.645338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.645353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.653643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.653769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.662149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.662476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.662492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.671042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.671195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.671210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.677286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.677432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.677448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.681665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.681823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.681839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.687054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.687326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.696551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.696668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.696683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.704128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.704433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.704450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.712141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.712303] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.712318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.719199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.719324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.723791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.723906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.723921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.728091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.728209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.728224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.732109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.732240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.732255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.736041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.736173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.736190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.740247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.740379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.740394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.474 [2024-07-12 01:55:53.744284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.474 [2024-07-12 01:55:53.744450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.474 [2024-07-12 01:55:53.744465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.749249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.749432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.749448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.759765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.759964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.759979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.770587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.770831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.770845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.780359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.780623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.780638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.790797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.791210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.791226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.800796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.800931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.800946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.812356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 
[2024-07-12 01:55:53.812485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.812500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.475 [2024-07-12 01:55:53.823138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.475 [2024-07-12 01:55:53.823393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.475 [2024-07-12 01:55:53.823409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.835141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.835557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.835573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.847192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.847335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.847351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.857921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.858155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.858171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.868890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.869297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.869313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.880006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.880300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.880316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.891397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) 
with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.891720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.891736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.899772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.736 [2024-07-12 01:55:53.900116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.736 [2024-07-12 01:55:53.900132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.736 [2024-07-12 01:55:53.909109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.909245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.909261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.916192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.916424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.916439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.922671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.922841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.922856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.932452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.932733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.932748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.937565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.937680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.937696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.944867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.945206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.945222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.954127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.954464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.954480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.963639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.963898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.963915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.973389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.973516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.973534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.981253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.981399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.981414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.987977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.988124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.988140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:53.994330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:53.994448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:53.994463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.000191] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.000348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.000364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.004021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.004182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.008062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.008181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.008197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.012226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.012345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.012360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.016394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.016508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.016523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.024508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.024833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.024849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.033722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.033941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.033957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
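The repeated "data_crc32_calc_done: *ERROR*: Data digest error" entries above are the NVMe/TCP data digest (DDGST) check failing on each WRITE, after which the command is completed with TRANSIENT TRANSPORT ERROR (00/22). As a minimal sketch (not the SPDK implementation), assuming the digest is a standard CRC32C over the PDU data payload and glossing over the transport's exact seed/finalization convention, a check of this kind could look like the C below; crc32c() and ddgst_ok() are hypothetical names used only for illustration.

/*
 * Illustrative sketch only: shows a CRC32C (Castagnoli) data-digest
 * check of the kind implied by the "Data digest error" log entries.
 * This is not SPDK source; names and conventions here are assumptions.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Bitwise CRC32C, reflected polynomial 0x82F63B78, init/final ~0. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            /* Shift right, XOR the polynomial in when the low bit was set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
    }
    return ~crc;
}

/* Returns true when the received digest matches the payload's CRC32C;
 * a mismatch corresponds to the "Data digest error" path logged above. */
static bool ddgst_ok(const void *payload, size_t len, uint32_t received_ddgst)
{
    return crc32c(payload, len) == received_ddgst;
}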
00:37:27.737 [2024-07-12 01:55:54.041264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.041380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.041395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.050282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.050535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.050556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.058870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.058990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.059005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.067472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.067780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.067796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.075454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.075573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.075588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.737 [2024-07-12 01:55:54.084067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.737 [2024-07-12 01:55:54.084349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.737 [2024-07-12 01:55:54.084366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.998 [2024-07-12 01:55:54.093348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.998 [2024-07-12 01:55:54.093468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.998 [2024-07-12 01:55:54.093483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.998 [2024-07-12 01:55:54.102020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.998 [2024-07-12 01:55:54.102380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.998 [2024-07-12 01:55:54.102396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.998 [2024-07-12 01:55:54.111075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.998 [2024-07-12 01:55:54.111194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.111210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.120209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.120347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.120362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.129392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.129661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.129676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.138011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.138158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.138173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.147421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.147539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.147555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.156150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.156524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.156540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.165338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.165473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.165489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.175739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.176034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.176053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.184285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.184467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.192873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.193169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.193185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.198846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.199013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.199029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.203655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.203931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.203955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.207630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.207741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.207756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.213463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.213571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.213586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.217338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.217458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.220989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.221092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.221108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.224593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.224700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.228167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.228274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.228289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.232302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.232418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.232434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.236346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.236677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 
[2024-07-12 01:55:54.236692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.240221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.240341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.244122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.244404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.244421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.248000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.248227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.248247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.252242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.252508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.252523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.255842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.255938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.255953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.259350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.259445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.259461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.262869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.262967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.262982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.266361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.266458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.266473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.269866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:27.999 [2024-07-12 01:55:54.269963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.999 [2024-07-12 01:55:54.269978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.999 [2024-07-12 01:55:54.273573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.273672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.273687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.277046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.277143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.277157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.280692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.280790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.280805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.284148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.284250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.284265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.287940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.288138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.288155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.292454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.292810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.292826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.297640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.297764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.301112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.301209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.301225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.304652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.304748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.304763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.308186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.308286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.308301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.311739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.311835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.311851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.315257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.315356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.315371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.318753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.318853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.318869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.322296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.322399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.322414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.325784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.325883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.325897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.329292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.329392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.329407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.332802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.332901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.332916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.336315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.336416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.336432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.340314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.340475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.340490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.344110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.344211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.344226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.347658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.347804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.347820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.000 [2024-07-12 01:55:54.351263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.000 [2024-07-12 01:55:54.351362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.000 [2024-07-12 01:55:54.351377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.355906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.356014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.356029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.360335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.360435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.360450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.365114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.365211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.365226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.369725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 
00:37:28.261 [2024-07-12 01:55:54.369823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.369838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.374930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.375029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.375044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.381440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.381694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.381709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.387201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.261 [2024-07-12 01:55:54.387312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.261 [2024-07-12 01:55:54.387327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.261 [2024-07-12 01:55:54.394929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a920) with pdu=0x2000190fef90 00:37:28.262 [2024-07-12 01:55:54.395032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.262 [2024-07-12 01:55:54.395047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.262 00:37:28.262 Latency(us) 00:37:28.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.262 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:28.262 nvme0n1 : 2.00 3805.59 475.70 0.00 0.00 4196.73 1658.88 12779.52 00:37:28.262 =================================================================================================================== 00:37:28.262 Total : 3805.59 475.70 0.00 0.00 4196.73 1658.88 12779.52 00:37:28.262 0 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:28.262 | .driver_specific 00:37:28.262 | .nvme_error 00:37:28.262 | .status_code 00:37:28.262 | .command_transient_transport_error' 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 246 > 0 )) 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 61290 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 61290 ']' 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 61290 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61290 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61290' 00:37:28.262 killing process with pid 61290 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 61290 00:37:28.262 Received shutdown signal, test time was about 2.000000 seconds 00:37:28.262 00:37:28.262 Latency(us) 00:37:28.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.262 =================================================================================================================== 00:37:28.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.262 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 61290 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 59032 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 59032 ']' 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 59032 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59032 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59032' 00:37:28.522 killing process with pid 59032 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 59032 00:37:28.522 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 59032 00:37:28.783 00:37:28.783 real 0m15.953s 00:37:28.783 user 0m31.240s 00:37:28.783 sys 0m3.319s 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:37:28.783 ************************************ 00:37:28.783 END TEST nvmf_digest_error 00:37:28.783 ************************************ 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:28.783 rmmod nvme_tcp 00:37:28.783 rmmod nvme_fabrics 00:37:28.783 rmmod nvme_keyring 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:28.783 01:55:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 59032 ']' 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 59032 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 59032 ']' 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 59032 00:37:28.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (59032) - No such process 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 59032 is not found' 00:37:28.783 Process with pid 59032 is not found 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:28.783 01:55:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.328 01:55:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:31.328 00:37:31.328 real 0m42.443s 00:37:31.328 user 1m5.522s 00:37:31.328 sys 0m12.367s 00:37:31.328 01:55:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:31.328 01:55:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.328 ************************************ 00:37:31.328 END TEST nvmf_digest 00:37:31.328 ************************************ 00:37:31.328 01:55:57 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:37:31.328 01:55:57 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:37:31.328 01:55:57 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:37:31.328 01:55:57 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:37:31.328 01:55:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:31.328 01:55:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:31.328 01:55:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.328 ************************************ 00:37:31.328 START TEST nvmf_bdevperf 00:37:31.328 ************************************ 00:37:31.328 01:55:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:31.328 * Looking for test storage... 00:37:31.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:31.328 01:55:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.328 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:31.328 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.328 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:37:31.329 01:55:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:39.465 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:39.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:39.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:39.466 Found net devices under 0000:31:00.0: cvl_0_0 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:39.466 Found net devices under 0000:31:00.1: cvl_0_1 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:39.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:39.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:37:39.466 00:37:39.466 --- 10.0.0.2 ping statistics --- 00:37:39.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.466 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:39.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:39.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:37:39.466 00:37:39.466 --- 10.0.0.1 ping statistics --- 00:37:39.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.466 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=66507 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 66507 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 66507 ']' 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:39.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:39.466 01:56:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.466 [2024-07-12 01:56:05.477538] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:39.466 [2024-07-12 01:56:05.477603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:39.466 EAL: No free 2048 kB hugepages reported on node 1 00:37:39.466 [2024-07-12 01:56:05.574395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:39.466 [2024-07-12 01:56:05.622768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.466 [2024-07-12 01:56:05.622826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.466 [2024-07-12 01:56:05.622834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.466 [2024-07-12 01:56:05.622841] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.466 [2024-07-12 01:56:05.622847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:39.466 [2024-07-12 01:56:05.622970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:39.466 [2024-07-12 01:56:05.623133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.466 [2024-07-12 01:56:05.623133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 [2024-07-12 01:56:06.306918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 Malloc0 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:40.036 [2024-07-12 01:56:06.372567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:40.036 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:40.037 { 00:37:40.037 "params": { 00:37:40.037 "name": "Nvme$subsystem", 00:37:40.037 "trtype": "$TEST_TRANSPORT", 00:37:40.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:40.037 "adrfam": "ipv4", 00:37:40.037 "trsvcid": "$NVMF_PORT", 00:37:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:40.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:40.037 "hdgst": ${hdgst:-false}, 00:37:40.037 "ddgst": ${ddgst:-false} 00:37:40.037 }, 00:37:40.037 "method": "bdev_nvme_attach_controller" 00:37:40.037 } 00:37:40.037 EOF 00:37:40.037 )") 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:40.037 01:56:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:40.037 "params": { 00:37:40.037 "name": "Nvme1", 00:37:40.037 "trtype": "tcp", 00:37:40.037 "traddr": "10.0.0.2", 00:37:40.037 "adrfam": "ipv4", 00:37:40.037 "trsvcid": "4420", 00:37:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:40.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:40.037 "hdgst": false, 00:37:40.037 "ddgst": false 00:37:40.037 }, 00:37:40.037 "method": "bdev_nvme_attach_controller" 00:37:40.037 }' 00:37:40.297 [2024-07-12 01:56:06.426702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:40.297 [2024-07-12 01:56:06.426751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66831 ] 00:37:40.297 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.297 [2024-07-12 01:56:06.491254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.297 [2024-07-12 01:56:06.522104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.558 Running I/O for 1 seconds... 
00:37:41.498 00:37:41.498 Latency(us) 00:37:41.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.498 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:41.498 Verification LBA range: start 0x0 length 0x4000 00:37:41.498 Nvme1n1 : 1.01 8948.14 34.95 0.00 0.00 14241.38 3085.65 15728.64 00:37:41.498 =================================================================================================================== 00:37:41.498 Total : 8948.14 34.95 0.00 0.00 14241.38 3085.65 15728.64 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=67162 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:41.759 { 00:37:41.759 "params": { 00:37:41.759 "name": "Nvme$subsystem", 00:37:41.759 "trtype": "$TEST_TRANSPORT", 00:37:41.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.759 "adrfam": "ipv4", 00:37:41.759 "trsvcid": "$NVMF_PORT", 00:37:41.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.759 "hdgst": ${hdgst:-false}, 00:37:41.759 "ddgst": ${ddgst:-false} 00:37:41.759 }, 00:37:41.759 "method": "bdev_nvme_attach_controller" 00:37:41.759 } 00:37:41.759 EOF 00:37:41.759 )") 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:41.759 01:56:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:41.759 "params": { 00:37:41.759 "name": "Nvme1", 00:37:41.759 "trtype": "tcp", 00:37:41.759 "traddr": "10.0.0.2", 00:37:41.759 "adrfam": "ipv4", 00:37:41.759 "trsvcid": "4420", 00:37:41.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:41.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:41.759 "hdgst": false, 00:37:41.759 "ddgst": false 00:37:41.759 }, 00:37:41.759 "method": "bdev_nvme_attach_controller" 00:37:41.759 }' 00:37:41.759 [2024-07-12 01:56:07.967389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:41.759 [2024-07-12 01:56:07.967446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67162 ] 00:37:41.759 EAL: No free 2048 kB hugepages reported on node 1 00:37:41.759 [2024-07-12 01:56:08.033757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.759 [2024-07-12 01:56:08.063094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.020 Running I/O for 15 seconds... 
00:37:45.324 01:56:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 66507
00:37:45.324 01:56:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:37:45.324 [2024-07-12 01:56:10.933190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:45.324 [2024-07-12 01:56:10.933235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:45.324-00:37:45.327 [... the same command/completion pair repeats for every outstanding request on qid:1: WRITEs lba:94024-94080 and READs lba:93064-94000, each completed with ABORTED - SQ DELETION (00/08) ...]
00:37:45.327 [2024-07-12 01:56:10.935477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1433430 is same with the state(5) to be set
00:37:45.327 [2024-07-12 01:56:10.935486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:37:45.327 [2024-07-12 01:56:10.935492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:45.327 [2024-07-12 01:56:10.935498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94008 len:8 PRP1 0x0 PRP2 0x0
00:37:45.327 [2024-07-12 01:56:10.935506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:45.327 [2024-07-12 01:56:10.935544] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1433430 was disconnected and freed. reset controller.
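The burst of ABORTED - SQ DELETION completions above is the host-side fallout of the fault-injection step traced at the top of this block: bdevperf.sh hard-kills the NVMe-oF TCP target while I/O is in flight, so every queued command on qid:1 is completed with an abort status and the qpair is torn down. A minimal sketch of that step (the PID and delay come from the trace above; the variable name is illustrative only):

    TARGET_PID=66507        # nvmf_tgt process under test, per the kill command in the trace
    kill -9 "$TARGET_PID"   # hard-kill the target so outstanding I/O is aborted (SQ deletion)
    sleep 3                 # let bdevperf notice the dead connection and begin controller resets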
00:37:45.327 [2024-07-12 01:56:10.939106] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:45.327 [2024-07-12 01:56:10.939153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor
00:37:45.327 [2024-07-12 01:56:10.939983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.327 [2024-07-12 01:56:10.940000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420
00:37:45.327 [2024-07-12 01:56:10.940008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set
00:37:45.327 [2024-07-12 01:56:10.940235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor
00:37:45.327 [2024-07-12 01:56:10.940459] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:45.327 [2024-07-12 01:56:10.940467] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:45.327 [2024-07-12 01:56:10.940476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:45.327 [2024-07-12 01:56:10.944035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:45.327-00:37:45.329 [... the same reset cycle (resetting controller -> connect() failed, errno = 111 to addr=10.0.0.2, port=4420 -> Failed to flush tqpair=0x12015a0 (9): Bad file descriptor -> controller reinitialization failed -> Resetting controller failed.) repeats roughly every 14 ms, with further attempts at 01:56:10.953260, 01:56:10.967239, 01:56:10.981226, 01:56:10.995046, 01:56:11.009148, 01:56:11.022967, 01:56:11.036912, 01:56:11.050792, 01:56:11.064743, 01:56:11.078575, 01:56:11.092519, 01:56:11.106334, 01:56:11.120291, 01:56:11.134136, 01:56:11.148067, 01:56:11.161986 and 01:56:11.175932 ...]
00:37:45.329 [2024-07-12 01:56:11.189755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.190464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.190501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.190511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.190750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.190972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.190980] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.190987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.194545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.329 [2024-07-12 01:56:11.203750] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.204495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.204532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.204543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.204781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.205004] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.205012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.205019] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.208578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.329 [2024-07-12 01:56:11.217577] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.218271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.218308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.218319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.218561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.218784] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.218793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.218805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.222365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.329 [2024-07-12 01:56:11.231571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.232223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.232267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.232279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.232519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.232741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.232750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.232757] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.236313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.329 [2024-07-12 01:56:11.245517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.246013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.246031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.246039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.246264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.246483] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.246491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.246497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.250044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.329 [2024-07-12 01:56:11.259461] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.260052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.260068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.260075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.260298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.260518] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.260525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.260532] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.264074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.329 [2024-07-12 01:56:11.273287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.273845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.273860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.273867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.274085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.274309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.274317] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.274324] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.277864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.329 [2024-07-12 01:56:11.287120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.287768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.287805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.287815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.288054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.329 [2024-07-12 01:56:11.288285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.329 [2024-07-12 01:56:11.288294] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.329 [2024-07-12 01:56:11.288302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.329 [2024-07-12 01:56:11.291852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.329 [2024-07-12 01:56:11.301059] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.329 [2024-07-12 01:56:11.301722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.329 [2024-07-12 01:56:11.301759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.329 [2024-07-12 01:56:11.301770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.329 [2024-07-12 01:56:11.302008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.302237] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.302246] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.302254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.305806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.330 [2024-07-12 01:56:11.315013] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.315710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.315747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.315758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.316000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.316223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.316239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.316247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.319797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.330 [2024-07-12 01:56:11.329008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.329592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.329610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.329618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.329836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.330056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.330063] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.330070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.333623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.330 [2024-07-12 01:56:11.342835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.343509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.343546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.343557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.343795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.344018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.344026] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.344033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.347596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.330 [2024-07-12 01:56:11.356816] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.357500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.357537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.357547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.357785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.358008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.358016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.358027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.361594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.330 [2024-07-12 01:56:11.370809] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.371382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.371401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.371409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.371628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.371847] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.371856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.371863] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.375427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.330 [2024-07-12 01:56:11.384640] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.385243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.385259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.385267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.385486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.385704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.385712] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.385719] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.389271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.330 [2024-07-12 01:56:11.398478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.399079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.399094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.399101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.399324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.399553] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.399560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.399567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.330 [2024-07-12 01:56:11.403113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.330 [2024-07-12 01:56:11.412327] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.330 [2024-07-12 01:56:11.412921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.330 [2024-07-12 01:56:11.412939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.330 [2024-07-12 01:56:11.412946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.330 [2024-07-12 01:56:11.413165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.330 [2024-07-12 01:56:11.413389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.330 [2024-07-12 01:56:11.413397] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.330 [2024-07-12 01:56:11.413404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.416950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.331 [2024-07-12 01:56:11.426157] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.426688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.426703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.426711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.426929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.427147] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.427155] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.427162] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.430710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.331 [2024-07-12 01:56:11.440129] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.440705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.440720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.440727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.440945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.441163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.441172] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.441178] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.444728] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.331 [2024-07-12 01:56:11.453941] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.454519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.454556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.454568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.454806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.455033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.455041] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.455048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.458613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
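Note on the "Failed to flush tqpair=... (9): Bad file descriptor" records: the number in parentheses is the errno, and 9 is EBADF on Linux. Once the connect() attempt has failed, the socket behind the qpair is torn down, so the subsequent flush in nvme_tcp_qpair_process_completions operates on an invalid descriptor. A minimal illustration, again not SPDK code, is:

    /*
     * Illustration only: any send/flush on a descriptor that has already been
     * closed fails with errno 9 (EBADF), matching the flush error above.
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                             /* descriptor is now invalid */

        if (send(fd, "x", 1, 0) < 0) {
            /* Prints errno 9 (EBADF). */
            printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }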
00:37:45.331 [2024-07-12 01:56:11.467831] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.468527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.468564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.468575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.468813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.469036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.469044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.469052] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.472610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.331 [2024-07-12 01:56:11.481826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.482541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.482578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.482589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.482827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.483050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.483058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.483065] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.486625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.331 [2024-07-12 01:56:11.495633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.496295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.496332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.496342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.496581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.496804] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.496812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.496820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.500382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.331 [2024-07-12 01:56:11.509593] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.510279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.510317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.510328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.510570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.510793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.510801] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.510808] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.514366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.331 [2024-07-12 01:56:11.523575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.524256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.331 [2024-07-12 01:56:11.524292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.331 [2024-07-12 01:56:11.524302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.331 [2024-07-12 01:56:11.524541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.331 [2024-07-12 01:56:11.524763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.331 [2024-07-12 01:56:11.524771] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.331 [2024-07-12 01:56:11.524779] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.331 [2024-07-12 01:56:11.528335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.331 [2024-07-12 01:56:11.537536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.331 [2024-07-12 01:56:11.538102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.538139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.538149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.538397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.538620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.538628] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.538635] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.542183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.332 [2024-07-12 01:56:11.551387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.552053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.552090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.552105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.552353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.552576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.552584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.552592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.556139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.332 [2024-07-12 01:56:11.565344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.566020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.566056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.566067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.566314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.566538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.566546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.566553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.570100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.332 [2024-07-12 01:56:11.579310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.580005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.580041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.580051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.580300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.580524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.580532] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.580540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.584088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.332 [2024-07-12 01:56:11.593286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.593916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.593953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.593963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.594201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.594433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.594446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.594454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.598002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.332 [2024-07-12 01:56:11.607211] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.607820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.607838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.607846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.608065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.608290] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.608299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.608305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.611847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.332 [2024-07-12 01:56:11.621037] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.621496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.621512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.621519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.621737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.621956] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.621963] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.621970] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.625516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.332 [2024-07-12 01:56:11.634919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.635512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.635528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.635535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.635754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.635972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.635979] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.635986] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.639530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.332 [2024-07-12 01:56:11.648736] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.649441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.649477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.649488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.332 [2024-07-12 01:56:11.649726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.332 [2024-07-12 01:56:11.649949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.332 [2024-07-12 01:56:11.649957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.332 [2024-07-12 01:56:11.649964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.332 [2024-07-12 01:56:11.653521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.332 [2024-07-12 01:56:11.662717] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.332 [2024-07-12 01:56:11.663305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.332 [2024-07-12 01:56:11.663342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.332 [2024-07-12 01:56:11.663353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.333 [2024-07-12 01:56:11.663591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.333 [2024-07-12 01:56:11.663813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.333 [2024-07-12 01:56:11.663822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.333 [2024-07-12 01:56:11.663829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.333 [2024-07-12 01:56:11.667389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.595 [2024-07-12 01:56:11.676610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.677321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.677358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.677370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.677612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.677835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.677843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.677850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.681412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.595 [2024-07-12 01:56:11.690402] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.690965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.691001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.691012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.691269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.691493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.691501] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.691508] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.695060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.595 [2024-07-12 01:56:11.704270] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.704878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.704915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.704926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.705164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.705395] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.705405] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.705412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.708963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.595 [2024-07-12 01:56:11.718171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.718825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.718862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.718873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.719113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.719345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.719353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.719361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.722910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.595 [2024-07-12 01:56:11.732115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.732679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.732697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.732705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.732924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.733143] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.733150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.733165] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.736716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.595 [2024-07-12 01:56:11.745917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.746546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.746583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.746593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.746831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.747054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.595 [2024-07-12 01:56:11.747063] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.595 [2024-07-12 01:56:11.747070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.595 [2024-07-12 01:56:11.750629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.595 [2024-07-12 01:56:11.759834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.595 [2024-07-12 01:56:11.760434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.595 [2024-07-12 01:56:11.760471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.595 [2024-07-12 01:56:11.760482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.595 [2024-07-12 01:56:11.760720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.595 [2024-07-12 01:56:11.760943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.760951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.760958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.764515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.596 [2024-07-12 01:56:11.773721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.774176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.774196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.774203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.774430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.774650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.774658] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.774665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.778212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.596 [2024-07-12 01:56:11.787626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.788206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.788221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.788234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.788454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.788672] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.788679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.788686] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.792226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.596 [2024-07-12 01:56:11.801415] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.801968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.801983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.801990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.802208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.802432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.802440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.802447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.805985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.596 [2024-07-12 01:56:11.815386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.816018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.816055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.816065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.816314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.816538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.816546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.816553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.820101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.596 [2024-07-12 01:56:11.829303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.829993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.830029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.830041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.830293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.830521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.830529] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.830536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.834084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.596 [2024-07-12 01:56:11.843286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.843978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.844014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.844025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.844273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.844497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.844505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.844512] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.848059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.596 [2024-07-12 01:56:11.857295] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.857864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.857882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.857889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.858108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.858334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.858342] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.858349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.861894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.596 [2024-07-12 01:56:11.871091] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.871771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.871808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.871818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.872057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.872288] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.872297] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.872305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.875871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.596 [2024-07-12 01:56:11.885070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.885639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.885657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.885664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.885884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.886102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.886110] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.886117] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.889662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.596 [2024-07-12 01:56:11.899061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.596 [2024-07-12 01:56:11.899615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.596 [2024-07-12 01:56:11.899631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.596 [2024-07-12 01:56:11.899638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.596 [2024-07-12 01:56:11.899856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.596 [2024-07-12 01:56:11.900074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.596 [2024-07-12 01:56:11.900082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.596 [2024-07-12 01:56:11.900089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.596 [2024-07-12 01:56:11.903635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.597 [2024-07-12 01:56:11.913033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.597 [2024-07-12 01:56:11.913607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.597 [2024-07-12 01:56:11.913622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.597 [2024-07-12 01:56:11.913629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.597 [2024-07-12 01:56:11.913847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.597 [2024-07-12 01:56:11.914065] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.597 [2024-07-12 01:56:11.914072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.597 [2024-07-12 01:56:11.914079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.597 [2024-07-12 01:56:11.917623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.597 [2024-07-12 01:56:11.926845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.597 [2024-07-12 01:56:11.927442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.597 [2024-07-12 01:56:11.927462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.597 [2024-07-12 01:56:11.927469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.597 [2024-07-12 01:56:11.927688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.597 [2024-07-12 01:56:11.927907] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.597 [2024-07-12 01:56:11.927914] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.597 [2024-07-12 01:56:11.927921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.597 [2024-07-12 01:56:11.931464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.597 [2024-07-12 01:56:11.940656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.597 [2024-07-12 01:56:11.941210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.597 [2024-07-12 01:56:11.941225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.597 [2024-07-12 01:56:11.941238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.597 [2024-07-12 01:56:11.941457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.597 [2024-07-12 01:56:11.941677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.597 [2024-07-12 01:56:11.941685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.597 [2024-07-12 01:56:11.941692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.597 [2024-07-12 01:56:11.945239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.858 [2024-07-12 01:56:11.954447] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.858 [2024-07-12 01:56:11.954996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-07-12 01:56:11.955011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.858 [2024-07-12 01:56:11.955018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.858 [2024-07-12 01:56:11.955243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:11.955462] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:11.955470] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:11.955477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:11.959022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.859 [2024-07-12 01:56:11.968531] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:11.968938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:11.968957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:11.968965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:11.969185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:11.969417] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:11.969425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:11.969432] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:11.972981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.859 [2024-07-12 01:56:11.982417] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:11.982974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:11.982989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:11.982997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:11.983215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:11.983440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:11.983449] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:11.983455] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:11.987002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.859 [2024-07-12 01:56:11.996210] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:11.996871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:11.996907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:11.996918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:11.997156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:11.997386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:11.997396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:11.997403] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.000950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.859 [2024-07-12 01:56:12.010350] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.010993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.011029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.011040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.011288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.011511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.011520] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.011528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.015075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.859 [2024-07-12 01:56:12.024291] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.024861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.024878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.024886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.025105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.025331] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.025340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.025347] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.028892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.859 [2024-07-12 01:56:12.038088] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.038779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.038816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.038827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.039065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.039296] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.039306] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.039313] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.042862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.859 [2024-07-12 01:56:12.051898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.052567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.052604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.052615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.052853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.053076] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.053084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.053091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.056649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.859 [2024-07-12 01:56:12.065855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.066553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.066590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.066605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.066843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.067066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.067074] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.067082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.070639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.859 [2024-07-12 01:56:12.079656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.080262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.080298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.080309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.080547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.080771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.859 [2024-07-12 01:56:12.080780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.859 [2024-07-12 01:56:12.080788] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.859 [2024-07-12 01:56:12.084341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.859 [2024-07-12 01:56:12.093544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.859 [2024-07-12 01:56:12.094258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-07-12 01:56:12.094295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.859 [2024-07-12 01:56:12.094306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.859 [2024-07-12 01:56:12.094544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.859 [2024-07-12 01:56:12.094767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.094775] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.094783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.098338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.860 [2024-07-12 01:56:12.107544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.108161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.108197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.108209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.108458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.108682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.108694] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.108701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.112258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.860 [2024-07-12 01:56:12.121460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.122146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.122183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.122195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.122444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.122668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.122676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.122683] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.126233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.860 [2024-07-12 01:56:12.135432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.136078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.136115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.136125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.136373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.136596] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.136605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.136612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.140159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.860 [2024-07-12 01:56:12.149362] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.149948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.149966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.149973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.150192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.150418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.150426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.150433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.153974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.860 [2024-07-12 01:56:12.163168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.163719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.163735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.163742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.163961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.164179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.164188] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.164194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.167742] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.860 [2024-07-12 01:56:12.177151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.177792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.177829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.177839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.178077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.178310] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.178319] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.178326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.181877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:45.860 [2024-07-12 01:56:12.191075] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.191668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.191705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.191717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.191956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.192179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.192187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.192195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.195757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:45.860 [2024-07-12 01:56:12.204966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:45.860 [2024-07-12 01:56:12.205644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-07-12 01:56:12.205681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:45.860 [2024-07-12 01:56:12.205691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:45.860 [2024-07-12 01:56:12.205934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:45.860 [2024-07-12 01:56:12.206156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.860 [2024-07-12 01:56:12.206164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:45.860 [2024-07-12 01:56:12.206172] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.860 [2024-07-12 01:56:12.209732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.123 [2024-07-12 01:56:12.218943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.219626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.219663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.219675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.219917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.220140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.220148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.220156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.223713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.123 [2024-07-12 01:56:12.232918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.233503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.233540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.233552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.233794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.234016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.234024] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.234032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.237588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.123 [2024-07-12 01:56:12.246783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.247486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.247523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.247533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.247771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.247994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.248002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.248013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.251571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.123 [2024-07-12 01:56:12.260769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.261475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.261512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.261522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.261761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.261983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.261991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.261999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.265555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.123 [2024-07-12 01:56:12.274757] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.275453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.275490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.275500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.275739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.275961] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.275969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.275977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.279533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.123 [2024-07-12 01:56:12.288737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.289445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.289482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.289492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.289731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.289954] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.289962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.289969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.293527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.123 [2024-07-12 01:56:12.302723] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.303390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.303427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.303437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.303675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.303898] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.123 [2024-07-12 01:56:12.303906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.123 [2024-07-12 01:56:12.303913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.123 [2024-07-12 01:56:12.307468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.123 [2024-07-12 01:56:12.316664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.123 [2024-07-12 01:56:12.317317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.123 [2024-07-12 01:56:12.317354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.123 [2024-07-12 01:56:12.317366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.123 [2024-07-12 01:56:12.317608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.123 [2024-07-12 01:56:12.317831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.317839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.317847] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.321404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.330603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.331296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.331332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.331343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.331581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.331804] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.331812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.331819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.335383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.124 [2024-07-12 01:56:12.344604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.345215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.345241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.345249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.345468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.345691] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.345699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.345706] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.349256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.358465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.358959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.358975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.358982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.359201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.359427] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.359435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.359441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.362986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.124 [2024-07-12 01:56:12.372410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.372960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.372975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.372982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.373200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.373434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.373443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.373449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.376995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.386201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.386647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.386664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.386671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.386891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.387109] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.387117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.387124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.390679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.124 [2024-07-12 01:56:12.400100] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.400654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.400669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.400677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.400895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.401114] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.401121] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.401127] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.404680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.413891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.414426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.414442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.414449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.414667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.414886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.414893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.414900] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.418449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.124 [2024-07-12 01:56:12.427865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.428517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.428554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.428565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.428803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.429026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.429034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.429041] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.432601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.441815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.442548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.442584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.442603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.442841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.443063] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.443072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.443079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.446635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.124 [2024-07-12 01:56:12.455633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.456292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.456329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.456341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.456582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.124 [2024-07-12 01:56:12.456804] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.124 [2024-07-12 01:56:12.456813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.124 [2024-07-12 01:56:12.456821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.124 [2024-07-12 01:56:12.460378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.124 [2024-07-12 01:56:12.469578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.124 [2024-07-12 01:56:12.470187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.124 [2024-07-12 01:56:12.470204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.124 [2024-07-12 01:56:12.470212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.124 [2024-07-12 01:56:12.470438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.125 [2024-07-12 01:56:12.470658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.125 [2024-07-12 01:56:12.470665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.125 [2024-07-12 01:56:12.470672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.125 [2024-07-12 01:56:12.474226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.387 [2024-07-12 01:56:12.483438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.387 [2024-07-12 01:56:12.484104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.387 [2024-07-12 01:56:12.484141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.387 [2024-07-12 01:56:12.484151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.387 [2024-07-12 01:56:12.484398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.387 [2024-07-12 01:56:12.484622] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.387 [2024-07-12 01:56:12.484634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.387 [2024-07-12 01:56:12.484642] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.387 [2024-07-12 01:56:12.488192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.387 [2024-07-12 01:56:12.497403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.387 [2024-07-12 01:56:12.498087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.387 [2024-07-12 01:56:12.498123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.387 [2024-07-12 01:56:12.498134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.387 [2024-07-12 01:56:12.498381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.387 [2024-07-12 01:56:12.498605] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.387 [2024-07-12 01:56:12.498613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.387 [2024-07-12 01:56:12.498620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.387 [2024-07-12 01:56:12.502169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.387 [2024-07-12 01:56:12.511380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.387 [2024-07-12 01:56:12.512044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.387 [2024-07-12 01:56:12.512081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.387 [2024-07-12 01:56:12.512091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.387 [2024-07-12 01:56:12.512338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.387 [2024-07-12 01:56:12.512562] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.387 [2024-07-12 01:56:12.512571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.387 [2024-07-12 01:56:12.512578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.387 [2024-07-12 01:56:12.516126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.387 [2024-07-12 01:56:12.525332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.387 [2024-07-12 01:56:12.526002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.387 [2024-07-12 01:56:12.526039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.387 [2024-07-12 01:56:12.526049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.526294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.526518] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.526526] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.526534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.530081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.388 [2024-07-12 01:56:12.539294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.539984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.540021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.540031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.540278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.540502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.540510] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.540518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.544067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.388 [2024-07-12 01:56:12.553276] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.553977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.554014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.554024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.554270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.554494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.554502] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.554510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.558057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.388 [2024-07-12 01:56:12.567099] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.567630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.567648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.567656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.567875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.568093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.568101] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.568108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.571661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.388 [2024-07-12 01:56:12.581084] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.581558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.581574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.581585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.581804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.582023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.582031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.582038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.585586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.388 [2024-07-12 01:56:12.594996] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.595550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.595566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.595573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.595791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.596009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.596017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.596024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.599569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.388 [2024-07-12 01:56:12.608999] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.609533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.609548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.609555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.609773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.609991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.609999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.610006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.388 [2024-07-12 01:56:12.613563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.388 [2024-07-12 01:56:12.622970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.388 [2024-07-12 01:56:12.623561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.388 [2024-07-12 01:56:12.623577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.388 [2024-07-12 01:56:12.623584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.388 [2024-07-12 01:56:12.623802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.388 [2024-07-12 01:56:12.624021] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.388 [2024-07-12 01:56:12.624033] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.388 [2024-07-12 01:56:12.624039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.627589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.389 [2024-07-12 01:56:12.636787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.637369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.637405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.637417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.637657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.637880] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.637889] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.637896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.641453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.389 [2024-07-12 01:56:12.650653] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.651133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.651150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.651158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.651384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.651603] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.651611] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.651617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.655162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.389 [2024-07-12 01:56:12.664571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.665220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.665264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.665275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.665513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.665736] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.665744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.665751] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.669303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.389 [2024-07-12 01:56:12.678519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.679079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.679116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.679127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.679373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.679596] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.679604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.679612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.683161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.389 [2024-07-12 01:56:12.692367] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.692951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.692968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.692976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.693195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.693421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.693429] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.693436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.696980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.389 [2024-07-12 01:56:12.706185] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.706737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.706753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.706760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.706978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.707197] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.707204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.707211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.710761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.389 [2024-07-12 01:56:12.720171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.720814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.720851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.720862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.721104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.721334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.721344] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.721351] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.724902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.389 [2024-07-12 01:56:12.734107] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.389 [2024-07-12 01:56:12.734671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.389 [2024-07-12 01:56:12.734709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.389 [2024-07-12 01:56:12.734720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.389 [2024-07-12 01:56:12.734960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.389 [2024-07-12 01:56:12.735183] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.389 [2024-07-12 01:56:12.735191] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.389 [2024-07-12 01:56:12.735199] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.389 [2024-07-12 01:56:12.738755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.651 [2024-07-12 01:56:12.747962] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.651 [2024-07-12 01:56:12.749321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.651 [2024-07-12 01:56:12.749346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.651 [2024-07-12 01:56:12.749356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.651 [2024-07-12 01:56:12.749583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.651 [2024-07-12 01:56:12.749803] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.749811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.749818] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.753371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.761946] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.762544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.762561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.762569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.762788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.763006] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.763014] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.763025] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.766573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.652 [2024-07-12 01:56:12.775803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.776369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.776405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.776417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.776656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.776878] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.776886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.776894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.780451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.789657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.790268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.790287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.790294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.790513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.790732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.790740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.790747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.794297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.652 [2024-07-12 01:56:12.803494] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.804055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.804070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.804077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.804301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.804520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.804528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.804534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.808080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.817285] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.817931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.817971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.817982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.818220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.818452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.818461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.818469] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.822019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.652 [2024-07-12 01:56:12.831226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.831893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.831930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.831940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.832178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.832409] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.832418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.832426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.835973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.845178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.845658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.845676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.845683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.845902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.846121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.846128] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.846135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.849682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.652 [2024-07-12 01:56:12.859093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.859673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.859688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.859696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.859914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.860137] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.860145] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.860152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.863699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.872901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.873571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.873608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.873620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.873861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.874083] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.874091] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.874099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.877666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.652 [2024-07-12 01:56:12.886874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.887539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.887575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.887586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.887824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.888047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.652 [2024-07-12 01:56:12.888055] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.652 [2024-07-12 01:56:12.888062] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.652 [2024-07-12 01:56:12.891616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.652 [2024-07-12 01:56:12.900824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.652 [2024-07-12 01:56:12.901414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.652 [2024-07-12 01:56:12.901433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.652 [2024-07-12 01:56:12.901441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.652 [2024-07-12 01:56:12.901660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.652 [2024-07-12 01:56:12.901879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.901886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.901893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.905448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.653 [2024-07-12 01:56:12.914654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.915329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.915366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.915377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.915615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.915837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.915846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.915853] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.919411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.653 [2024-07-12 01:56:12.928619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.929311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.929348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.929360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.929601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.929823] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.929831] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.929839] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.933394] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.653 [2024-07-12 01:56:12.942598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.943225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.943270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.943281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.943519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.943742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.943750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.943758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.947313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.653 [2024-07-12 01:56:12.956520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.957123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.957141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.957153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.957377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.957597] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.957605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.957612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.961156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.653 [2024-07-12 01:56:12.970362] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.971011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.971047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.971058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.971304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.971528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.971536] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.971544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.975101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.653 [2024-07-12 01:56:12.984313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.984891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.984909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.984916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.985135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.985358] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.985367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.985374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:12.988914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.653 [2024-07-12 01:56:12.998195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.653 [2024-07-12 01:56:12.998879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.653 [2024-07-12 01:56:12.998916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.653 [2024-07-12 01:56:12.998927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.653 [2024-07-12 01:56:12.999165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.653 [2024-07-12 01:56:12.999396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.653 [2024-07-12 01:56:12.999409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.653 [2024-07-12 01:56:12.999417] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.653 [2024-07-12 01:56:13.002966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.012409] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.013011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.013030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.013037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.013264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.013484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.013493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.013500] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.017046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.026248] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.027313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.027337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.027345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.027569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.027789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.027797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.027804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.031356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.040154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.040778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.040815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.040825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.041064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.041295] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.041305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.041312] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.044862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.054072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.054713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.054731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.054738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.054958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.055177] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.055184] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.055191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.058741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.067948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.068551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.068567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.068574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.068794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.069013] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.069021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.069027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.072719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.081942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.082584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.082622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.082632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.082870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.083093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.083102] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.083109] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.086669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.095878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.096524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.096562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.096572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.096815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.097038] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.097047] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.097055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.100610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.109822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.110405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.110424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.110432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.110651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.110870] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.110878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.110884] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.114434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.123638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.124147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.124163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.124170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.124394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.124613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.124621] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.124628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.128169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.137602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.138140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.138176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.138188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.138440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.138665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.138673] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.138685] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.142238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.151448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.152145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.152182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.152194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.152441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.152665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.152673] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.152680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.156228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.165436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.166038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.166057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.166064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.166288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.166508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.166515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.166522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.170065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.916 [2024-07-12 01:56:13.179283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.179875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.179891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.179898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.916 [2024-07-12 01:56:13.180116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.916 [2024-07-12 01:56:13.180340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.916 [2024-07-12 01:56:13.180349] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.916 [2024-07-12 01:56:13.180356] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.916 [2024-07-12 01:56:13.183897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.916 [2024-07-12 01:56:13.193098] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.916 [2024-07-12 01:56:13.193757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-07-12 01:56:13.193794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.916 [2024-07-12 01:56:13.193806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.194046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.194277] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.194286] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.194293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.197842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.917 [2024-07-12 01:56:13.207049] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.917 [2024-07-12 01:56:13.207727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-07-12 01:56:13.207765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.917 [2024-07-12 01:56:13.207775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.208013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.208243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.208252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.208259] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.211809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.917 [2024-07-12 01:56:13.221014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.917 [2024-07-12 01:56:13.221608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-07-12 01:56:13.221644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.917 [2024-07-12 01:56:13.221655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.221893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.222116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.222124] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.222131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.225695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.917 [2024-07-12 01:56:13.234903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.917 [2024-07-12 01:56:13.235467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-07-12 01:56:13.235486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.917 [2024-07-12 01:56:13.235493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.235717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.235937] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.235944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.235951] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.239498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:46.917 [2024-07-12 01:56:13.248698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.917 [2024-07-12 01:56:13.249253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-07-12 01:56:13.249269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.917 [2024-07-12 01:56:13.249276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.249494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.249713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.249720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.249727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.253273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:46.917 [2024-07-12 01:56:13.262682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:46.917 [2024-07-12 01:56:13.263241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-07-12 01:56:13.263256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:46.917 [2024-07-12 01:56:13.263263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:46.917 [2024-07-12 01:56:13.263482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:46.917 [2024-07-12 01:56:13.263700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:46.917 [2024-07-12 01:56:13.263707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:46.917 [2024-07-12 01:56:13.263714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:46.917 [2024-07-12 01:56:13.267302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.276512] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.277152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.277189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.277201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.277447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.277671] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.277680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.277691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.281243] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.179 [2024-07-12 01:56:13.290446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.291141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.291178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.291188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.291435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.291658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.291666] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.291674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.295223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.304428] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.305118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.305155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.305165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.305412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.305635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.305643] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.305650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.309197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.179 [2024-07-12 01:56:13.318401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.319096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.319133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.319143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.319391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.319614] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.319622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.319629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.323179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.332389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.333085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.333125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.333136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.333383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.333607] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.333614] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.333622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.337168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.179 [2024-07-12 01:56:13.346373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.347062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.347098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.347109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.347357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.347580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.347589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.347596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.351144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.360349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.360991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.361028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.361038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.361285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.361508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.361516] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.361524] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.365072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.179 [2024-07-12 01:56:13.374275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.374966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.375003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.375014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.375269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.375504] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.375513] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.375520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.379067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.388062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.388629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.388648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.388656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.388875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.389094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.389102] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.389108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.392658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.179 [2024-07-12 01:56:13.401854] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.402432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.402448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.179 [2024-07-12 01:56:13.402455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.179 [2024-07-12 01:56:13.402674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.179 [2024-07-12 01:56:13.402892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.179 [2024-07-12 01:56:13.402900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.179 [2024-07-12 01:56:13.402906] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.179 [2024-07-12 01:56:13.406452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.179 [2024-07-12 01:56:13.415645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.179 [2024-07-12 01:56:13.416220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.179 [2024-07-12 01:56:13.416240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.416248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.416465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.416683] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.416692] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.416699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.420250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.180 [2024-07-12 01:56:13.429444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.430097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.430133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.430144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.430392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.430616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.430623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.430631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.434184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.180 [2024-07-12 01:56:13.443388] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.444061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.444098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.444108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.444355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.444578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.444586] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.444593] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.448140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.180 [2024-07-12 01:56:13.457348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.458018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.458055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.458066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.458311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.458535] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.458543] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.458550] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.462097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.180 [2024-07-12 01:56:13.471304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.471794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.471812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.471823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.472043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.472269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.472277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.472284] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.475826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.180 [2024-07-12 01:56:13.485243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.485832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.485847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.485854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.486072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.486296] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.486305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.486312] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.489853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.180 [2024-07-12 01:56:13.499046] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.499721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.499758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.499769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.500007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.500238] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.500247] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.500255] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.503805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.180 [2024-07-12 01:56:13.513008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.513563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.513581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.513588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.513808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.514026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.514038] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.514044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.517593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.180 [2024-07-12 01:56:13.526997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.180 [2024-07-12 01:56:13.527631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.180 [2024-07-12 01:56:13.527646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.180 [2024-07-12 01:56:13.527653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.180 [2024-07-12 01:56:13.527872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.180 [2024-07-12 01:56:13.528090] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.180 [2024-07-12 01:56:13.528097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.180 [2024-07-12 01:56:13.528104] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.180 [2024-07-12 01:56:13.531652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.442 [2024-07-12 01:56:13.540849] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.541515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.541552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.541562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.541800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.542023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.542031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.542039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.545596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.442 [2024-07-12 01:56:13.554796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.555500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.555537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.555548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.555786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.556009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.556017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.556024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.559580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.442 [2024-07-12 01:56:13.568785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.569532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.569569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.569579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.569818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.570040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.570048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.570056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.573612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.442 [2024-07-12 01:56:13.582613] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.583248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.583291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.583302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.583539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.583762] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.583770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.583777] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.587335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.442 [2024-07-12 01:56:13.596535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.597219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.597261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.597272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.597510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.597732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.597740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.597748] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.601301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.442 [2024-07-12 01:56:13.610501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.611170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.611206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.611217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.611467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.611691] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.611699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.611706] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.615256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.442 [2024-07-12 01:56:13.624455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.625128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.625165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.625175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.625423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.625646] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.625655] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.625662] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.629210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.442 [2024-07-12 01:56:13.638414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.639107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.639143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.639153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.639400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.639624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.639632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.639639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.643194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.442 [2024-07-12 01:56:13.652397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.652959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.652995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.442 [2024-07-12 01:56:13.653006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.442 [2024-07-12 01:56:13.653253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.442 [2024-07-12 01:56:13.653476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.442 [2024-07-12 01:56:13.653485] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.442 [2024-07-12 01:56:13.653496] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.442 [2024-07-12 01:56:13.657045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.442 [2024-07-12 01:56:13.666250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.442 [2024-07-12 01:56:13.666942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.442 [2024-07-12 01:56:13.666978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.666989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.667227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.667460] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.667469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.667476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.671024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.443 [2024-07-12 01:56:13.680239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.680887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.680924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.680934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.681172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.681404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.681413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.681420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.684969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.443 [2024-07-12 01:56:13.694168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.694763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.694780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.694788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.695006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.695225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.695239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.695246] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.698788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.443 [2024-07-12 01:56:13.707985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.708565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.708582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.708589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.708808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.709026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.709033] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.709040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.712585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.443 [2024-07-12 01:56:13.721776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.722444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.722480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.722491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.722729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.722952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.722960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.722967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.726524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.443 [2024-07-12 01:56:13.735726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.736337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.736374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.736386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.736627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.736849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.736858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.736865] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.740422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.443 [2024-07-12 01:56:13.749627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.750325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.750362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.750374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.750619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.750842] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.750850] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.750857] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.754415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.443 [2024-07-12 01:56:13.763620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.764273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.764311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.764323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.764565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.764787] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.764795] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.764803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.768361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.443 [2024-07-12 01:56:13.777571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.778278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.778315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.778325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.778563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.778785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.778794] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.778801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.782357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.443 [2024-07-12 01:56:13.791560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.443 [2024-07-12 01:56:13.792255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.443 [2024-07-12 01:56:13.792292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.443 [2024-07-12 01:56:13.792304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.443 [2024-07-12 01:56:13.792544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.443 [2024-07-12 01:56:13.792767] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.443 [2024-07-12 01:56:13.792775] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.443 [2024-07-12 01:56:13.792783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.443 [2024-07-12 01:56:13.796346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.706 [2024-07-12 01:56:13.805550] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.806268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.806305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.806316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.806554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.806776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.806784] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.806792] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.810349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.706 [2024-07-12 01:56:13.819343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.820007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.820044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.820054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.820300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.820524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.820532] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.820540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.824086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.706 [2024-07-12 01:56:13.833289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.833941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.833978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.833990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.834240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.834464] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.834472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.834479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.838030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.706 [2024-07-12 01:56:13.847233] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.847800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.847821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.847829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.848048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.848277] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.848285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.848293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.851839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.706 [2024-07-12 01:56:13.861034] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.861568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.861583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.861591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.861809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.862028] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.862035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.862041] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.865590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.706 [2024-07-12 01:56:13.874993] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.706 [2024-07-12 01:56:13.875550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.706 [2024-07-12 01:56:13.875565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.706 [2024-07-12 01:56:13.875572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.706 [2024-07-12 01:56:13.875790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.706 [2024-07-12 01:56:13.876008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.706 [2024-07-12 01:56:13.876016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.706 [2024-07-12 01:56:13.876022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.706 [2024-07-12 01:56:13.879580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.706 [2024-07-12 01:56:13.888774] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.889358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.889373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.889380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.889598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.889820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.889827] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.889834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.893378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.707 [2024-07-12 01:56:13.902574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.903217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.903260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.903271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.903509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.903732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.903740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.903748] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.907301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.707 [2024-07-12 01:56:13.916500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.917182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.917218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.917228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.917475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.917698] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.917706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.917713] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.921263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 66507 Killed "${NVMF_APP[@]}" "$@" 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:47.707 [2024-07-12 01:56:13.930464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:47.707 [2024-07-12 01:56:13.931134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.931171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.931183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.931431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.931659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.931668] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.931675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.935220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=68179 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 68179 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 68179 ']' 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:47.707 01:56:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:47.707 [2024-07-12 01:56:13.944423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.944999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.945042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.945056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.945302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.945526] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.945534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.945542] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.949089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
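Interleaved with the reset attempts, bdevperf.sh has killed the previous target (PID 66507) and restarted it: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0xE, and waitforlisten blocks until PID 68179 is up and serving the RPC socket at /var/tmp/spdk.sock. A hedged sketch of such a wait loop follows; the helper name wait_for_rpc_socket is hypothetical and this is not SPDK's actual waitforlisten implementation:

/* Sketch of a "wait until the target listens on its RPC socket" helper.
 * Illustration only; retries a UNIX-domain connect() to the RPC socket
 * path until it succeeds or a retry budget runs out. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_rpc_socket(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);            /* target is up and accepting RPCs */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);       /* 100 ms between attempts */
    }
    return -1;                    /* timed out waiting for the listener */
}

int main(void)
{
    if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never started listening\n");
        return 1;
    }
    printf("target is listening\n");
    return 0;
}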
00:37:47.707 [2024-07-12 01:56:13.958304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.958863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.958881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.958889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.959108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.959333] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.959343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.707 [2024-07-12 01:56:13.959350] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.707 [2024-07-12 01:56:13.962897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.707 [2024-07-12 01:56:13.972099] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.707 [2024-07-12 01:56:13.972634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.707 [2024-07-12 01:56:13.972671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.707 [2024-07-12 01:56:13.972683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.707 [2024-07-12 01:56:13.972923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.707 [2024-07-12 01:56:13.973146] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.707 [2024-07-12 01:56:13.973155] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:13.973163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:13.976719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.708 [2024-07-12 01:56:13.985936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 [2024-07-12 01:56:13.986609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:13.986647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:13.986658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:13.986896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:13.987119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:13.987128] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:13.987136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:13.990691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.708 [2024-07-12 01:56:13.992606] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:47.708 [2024-07-12 01:56:13.992651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.708 [2024-07-12 01:56:13.999897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 [2024-07-12 01:56:14.000592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:14.000631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:14.000641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:14.000880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:14.001103] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:14.001113] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:14.001121] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:14.004677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.708 [2024-07-12 01:56:14.013874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 [2024-07-12 01:56:14.014575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:14.014614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:14.014624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:14.014863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:14.015087] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:14.015096] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:14.015103] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:14.018662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.708 [2024-07-12 01:56:14.027870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 EAL: No free 2048 kB hugepages reported on node 1 00:37:47.708 [2024-07-12 01:56:14.028585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:14.028623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:14.028634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:14.028872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:14.029095] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:14.029104] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:14.029112] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:14.032750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.708 [2024-07-12 01:56:14.041756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 [2024-07-12 01:56:14.042369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:14.042407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:14.042420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:14.042661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:14.042884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:14.042893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:14.042900] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:14.046460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.708 [2024-07-12 01:56:14.055666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.708 [2024-07-12 01:56:14.056227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.708 [2024-07-12 01:56:14.056275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.708 [2024-07-12 01:56:14.056287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.708 [2024-07-12 01:56:14.056533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.708 [2024-07-12 01:56:14.056757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.708 [2024-07-12 01:56:14.056766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.708 [2024-07-12 01:56:14.056774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.708 [2024-07-12 01:56:14.060327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.971 [2024-07-12 01:56:14.069532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.971 [2024-07-12 01:56:14.070109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.971 [2024-07-12 01:56:14.070128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.971 [2024-07-12 01:56:14.070136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.971 [2024-07-12 01:56:14.070360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.971 [2024-07-12 01:56:14.070581] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.971 [2024-07-12 01:56:14.070589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.971 [2024-07-12 01:56:14.070597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.971 [2024-07-12 01:56:14.074137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.971 [2024-07-12 01:56:14.078526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:47.971 [2024-07-12 01:56:14.083357] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.971 [2024-07-12 01:56:14.083960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.971 [2024-07-12 01:56:14.083977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.971 [2024-07-12 01:56:14.083985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.971 [2024-07-12 01:56:14.084204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.971 [2024-07-12 01:56:14.084429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.084439] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.084447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.087991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.972 [2024-07-12 01:56:14.097351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.097920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.097964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.097975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.098219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.098452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.098467] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.098476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.102027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.106957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.972 [2024-07-12 01:56:14.106982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.972 [2024-07-12 01:56:14.106988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.972 [2024-07-12 01:56:14.106993] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.972 [2024-07-12 01:56:14.106997] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:47.972 [2024-07-12 01:56:14.107100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.972 [2024-07-12 01:56:14.107294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:47.972 [2024-07-12 01:56:14.107492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.972 [2024-07-12 01:56:14.111243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.111949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.111989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.112000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.112251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.112474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.112486] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.112494] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:47.972 [2024-07-12 01:56:14.116046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.125050] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.125752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.125792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.125803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.126045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.126277] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.126286] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.126294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.129843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.139057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.139659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.139679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.139692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.139912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.140131] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.140139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.140146] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.143700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
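The restarted target was given -m 0xE, and the notices above reflect it: 0xE is binary 1110, so cores 1, 2 and 3 each start a reactor while core 0 is left free, matching "Total cores available: 3". A tiny sketch of how such a hex core mask expands into core indices, assuming nothing beyond standard C (SPDK's own mask parsing lives in its env/event code):

/* Expand a CPU core mask such as 0xE into the core indices it selects.
 * Illustrative only. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;   /* -m 0xE from the nvmf_tgt command line */
    int count = 0;

    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("reactor would run on core %d\n", core);
            count++;
        }
    }
    printf("total cores selected: %d\n", count);   /* prints 3 for 0xE */
    return 0;
}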
00:37:47.972 [2024-07-12 01:56:14.152901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.153588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.153626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.153636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.153878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.154102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.154110] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.154118] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.157677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.166892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.167586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.167623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.167634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.167875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.168097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.168105] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.168113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.171669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.972 [2024-07-12 01:56:14.180886] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.181567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.181604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.181615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.181854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.182076] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.182089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.182097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.185653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.194860] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.195609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.195642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.195653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.195892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.196115] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.196123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.196131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.199687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.972 [2024-07-12 01:56:14.208683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.209308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.209326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.209334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.209553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.209772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.972 [2024-07-12 01:56:14.209780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.972 [2024-07-12 01:56:14.209787] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.972 [2024-07-12 01:56:14.213336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.972 [2024-07-12 01:56:14.222536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.972 [2024-07-12 01:56:14.223190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.972 [2024-07-12 01:56:14.223226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.972 [2024-07-12 01:56:14.223246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.972 [2024-07-12 01:56:14.223489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.972 [2024-07-12 01:56:14.223712] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.223720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.223728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.227280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.973 [2024-07-12 01:56:14.236484] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.237218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.237261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.237274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.237516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.237739] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.237747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.237754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.241308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.973 [2024-07-12 01:56:14.250302] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.251007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.251044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.251055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.251301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.251525] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.251533] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.251541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.255087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.973 [2024-07-12 01:56:14.264300] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.264990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.265027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.265038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.265283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.265507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.265516] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.265523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.269072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.973 [2024-07-12 01:56:14.278293] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.278868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.278886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.278894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.279118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.279344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.279352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.279359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.282903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.973 [2024-07-12 01:56:14.292101] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.292762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.292800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.292810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.293049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.293279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.293288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.293296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.296845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:47.973 [2024-07-12 01:56:14.306048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.306762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.306799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.306809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.307048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.307280] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.307288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.307296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.310845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.973 [2024-07-12 01:56:14.319845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.973 [2024-07-12 01:56:14.320416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.973 [2024-07-12 01:56:14.320435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:47.973 [2024-07-12 01:56:14.320445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:47.973 [2024-07-12 01:56:14.320664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:47.973 [2024-07-12 01:56:14.320883] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.973 [2024-07-12 01:56:14.320891] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.973 [2024-07-12 01:56:14.320902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.973 [2024-07-12 01:56:14.324449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.333651] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.334244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.334260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.334267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.334485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.334704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.334712] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.334719] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.338267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.236 [2024-07-12 01:56:14.347465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.348171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.348208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.348219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.348465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.348688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.348697] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.348704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.352255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.361460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.362156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.362193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.362204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.362450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.362675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.362683] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.362691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.366244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.236 [2024-07-12 01:56:14.375263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.375857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.375902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.375914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.376167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.376399] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.376409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.376417] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.379975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.389180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.389765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.389783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.389791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.390011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.390235] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.390244] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.390251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.393793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.236 [2024-07-12 01:56:14.402997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.403688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.403725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.403736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.403974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.404196] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.404204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.404212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.407769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.416974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.417507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.417526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.417533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.417752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.417976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.417983] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.417990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.421539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.236 [2024-07-12 01:56:14.430948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.431434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.431471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.431483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.431725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.431948] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.431956] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.431964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.435520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.444934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.445509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.445547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.445557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.445796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.446018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.446027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.446034] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.449590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.236 [2024-07-12 01:56:14.458796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.459492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.459529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.459539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.236 [2024-07-12 01:56:14.459777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.236 [2024-07-12 01:56:14.460000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.236 [2024-07-12 01:56:14.460008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.236 [2024-07-12 01:56:14.460016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.236 [2024-07-12 01:56:14.463577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.236 [2024-07-12 01:56:14.472785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.236 [2024-07-12 01:56:14.473370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.236 [2024-07-12 01:56:14.473389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.236 [2024-07-12 01:56:14.473396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.473616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.473835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.473842] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.473849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.477398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.237 [2024-07-12 01:56:14.486609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.487185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.487223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.487242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.487485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.487708] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.487716] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.487723] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.491274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.237 [2024-07-12 01:56:14.500478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.501166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.501202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.501214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.501464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.501688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.501696] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.501704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.505258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.237 [2024-07-12 01:56:14.514460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.515027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.515064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.515078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.515324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.515548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.515556] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.515564] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.519112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.237 [2024-07-12 01:56:14.528318] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.528939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.528956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.528964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.529183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.529406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.529414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.529422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.532967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.237 [2024-07-12 01:56:14.542169] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.542751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.542766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.542774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.542992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.543210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.543219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.543225] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.546819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.237 [2024-07-12 01:56:14.556019] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.556454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.556469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.556476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.556695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.556913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.556924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.556931] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.560476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.237 [2024-07-12 01:56:14.569882] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.570597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.570635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.570646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.570884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.571107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.571115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.571123] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.574681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.237 [2024-07-12 01:56:14.583691] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.237 [2024-07-12 01:56:14.584347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.237 [2024-07-12 01:56:14.584384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.237 [2024-07-12 01:56:14.584396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.237 [2024-07-12 01:56:14.584638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.237 [2024-07-12 01:56:14.584861] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.237 [2024-07-12 01:56:14.584869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.237 [2024-07-12 01:56:14.584876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.237 [2024-07-12 01:56:14.588436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.499 [2024-07-12 01:56:14.597643] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.598348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.598386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.598398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.598640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.598862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.598871] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.598879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.602437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.499 [2024-07-12 01:56:14.611440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.612011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.612029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.612037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.612262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.612482] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.612490] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.612497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.616042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.499 [2024-07-12 01:56:14.625245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.625922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.625959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.625970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.626208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.626439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.626448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.626456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.630006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.499 [2024-07-12 01:56:14.639216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.639799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.639817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.639825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.640044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.640269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.640278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.640285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.643829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.499 [2024-07-12 01:56:14.653031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.653724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.653762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.653772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.654015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.654246] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.654255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.654262] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.657811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.499 [2024-07-12 01:56:14.667021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.667629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.667647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.667655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.667874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.668092] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.668101] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.668107] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.671659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.499 [2024-07-12 01:56:14.680878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.681359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.681397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.499 [2024-07-12 01:56:14.681409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.499 [2024-07-12 01:56:14.681651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.499 [2024-07-12 01:56:14.681874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.499 [2024-07-12 01:56:14.681882] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.499 [2024-07-12 01:56:14.681889] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.499 [2024-07-12 01:56:14.685447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.499 [2024-07-12 01:56:14.694861] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.499 [2024-07-12 01:56:14.695454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.499 [2024-07-12 01:56:14.695473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.695480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.695700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.695919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.695934] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.695946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.699499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.500 [2024-07-12 01:56:14.708702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.709271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.709308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.709320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.709560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.709783] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.709791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.709799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.713359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.500 [2024-07-12 01:56:14.722564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.723137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.723155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.723162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.723387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.723606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.723615] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.723622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.727167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.500 [2024-07-12 01:56:14.736369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.736984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.736999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.737007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.737225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.737449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.737458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.737464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.741011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.500 [2024-07-12 01:56:14.750205] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.750782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.750797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.750804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.751023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.751246] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.751254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.751261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.754804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
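Every reconnect attempt in the block above fails the same way: the initiator's connect() to 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED on Linux), which is consistent with the freshly restarted target not yet having its TCP transport or listener in place, so bdev_nvme keeps retrying the controller reset and reporting "Resetting controller failed." Below is a minimal, SPDK-independent sketch of that failure mode; the port mirrors the log, but 127.0.0.1 and the assumption that nothing is listening there are only for illustration.

import errno
import socket

# Connect to a TCP endpoint with no listener, mirroring what
# nvme_tcp_qpair_connect_sock hits in the log above.
addr, port = "127.0.0.1", 4420  # assumes no local listener on this port
try:
    with socket.create_connection((addr, port), timeout=1):
        print("connected unexpectedly")
except OSError as e:
    # On Linux a refused connection surfaces as errno 111 (ECONNREFUSED),
    # the same value printed by posix_sock_create in the log.
    print(e.errno, errno.errorcode.get(e.errno), e.errno == errno.ECONNREFUSED)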
00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.500 [2024-07-12 01:56:14.764003] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.764683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.764720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.764731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.764969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.765192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.765201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.765208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.768765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.500 [2024-07-12 01:56:14.777971] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.778561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.778579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.778587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.778806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.779025] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.779032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.779039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.782586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.500 [2024-07-12 01:56:14.791793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.792255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.792272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.792279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.792498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.792716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.792725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.792732] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.796280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.500 [2024-07-12 01:56:14.805688] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.806250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.806265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.806272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.806491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.806710] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.806718] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.806725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.808742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.500 [2024-07-12 01:56:14.810273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.500 [2024-07-12 01:56:14.819682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.820203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.820246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.820257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.820496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.820719] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.820728] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.820740] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.824289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.500 [2024-07-12 01:56:14.833490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 [2024-07-12 01:56:14.834109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.834126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.500 [2024-07-12 01:56:14.834134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.500 [2024-07-12 01:56:14.834359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.500 [2024-07-12 01:56:14.834578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.500 [2024-07-12 01:56:14.834586] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.500 [2024-07-12 01:56:14.834593] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.500 [2024-07-12 01:56:14.838135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.500 Malloc0 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:48.500 [2024-07-12 01:56:14.847338] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.500 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.500 [2024-07-12 01:56:14.847994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.500 [2024-07-12 01:56:14.848032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.501 [2024-07-12 01:56:14.848043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.501 [2024-07-12 01:56:14.848290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.501 [2024-07-12 01:56:14.848513] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.501 [2024-07-12 01:56:14.848522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.501 [2024-07-12 01:56:14.848529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.501 [2024-07-12 01:56:14.852080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.763 [2024-07-12 01:56:14.861292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.763 [2024-07-12 01:56:14.861865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.763 [2024-07-12 01:56:14.861883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.763 [2024-07-12 01:56:14.861891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.763 [2024-07-12 01:56:14.862110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.763 [2024-07-12 01:56:14.862343] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.763 [2024-07-12 01:56:14.862351] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.763 [2024-07-12 01:56:14.862358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.763 [2024-07-12 01:56:14.865903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.763 [2024-07-12 01:56:14.875107] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.763 [2024-07-12 01:56:14.875796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:48.763 [2024-07-12 01:56:14.875833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12015a0 with addr=10.0.0.2, port=4420 00:37:48.763 [2024-07-12 01:56:14.875845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12015a0 is same with the state(5) to be set 00:37:48.763 [2024-07-12 01:56:14.876087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12015a0 (9): Bad file descriptor 00:37:48.763 [2024-07-12 01:56:14.876318] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.763 [2024-07-12 01:56:14.876327] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:48.763 [2024-07-12 01:56:14.876334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:48.763 [2024-07-12 01:56:14.877906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.763 [2024-07-12 01:56:14.879893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.763 01:56:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 67162 00:37:48.763 [2024-07-12 01:56:14.889101] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:48.763 [2024-07-12 01:56:14.938408] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
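While the stale controller keeps failing to reconnect, bdevperf.sh rebuilds the target over JSON-RPC (steps @17 through @21 above): a TCP transport, a Malloc0 ram bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a TCP listener on 10.0.0.2:4420. As a sketch, the same sequence could be replayed by hand against a running nvmf_tgt; the scripts/rpc.py path is assumed here, and the arguments are copied verbatim from the trace above.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the host side reports "Resetting controller successful" and the bdevperf run proceeds.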
00:37:58.762 00:37:58.762 Latency(us) 00:37:58.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:58.762 Verification LBA range: start 0x0 length 0x4000 00:37:58.762 Nvme1n1 : 15.00 8238.03 32.18 9764.02 0.00 7084.71 580.27 16384.00 00:37:58.762 =================================================================================================================== 00:37:58.762 Total : 8238.03 32.18 9764.02 0.00 7084.71 580.27 16384.00 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:58.762 rmmod nvme_tcp 00:37:58.762 rmmod nvme_fabrics 00:37:58.762 rmmod nvme_keyring 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 68179 ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 68179 ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68179' 00:37:58.762 killing process with pid 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 68179 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:58.762 01:56:23 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:58.762 01:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.703 01:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:59.703 00:37:59.703 real 0m28.668s 00:37:59.703 user 1m3.394s 00:37:59.703 sys 0m7.699s 00:37:59.703 01:56:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:59.703 01:56:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.703 ************************************ 00:37:59.703 END TEST nvmf_bdevperf 00:37:59.703 ************************************ 00:37:59.703 01:56:25 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:59.703 01:56:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:59.703 01:56:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:59.703 01:56:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:59.703 ************************************ 00:37:59.703 START TEST nvmf_target_disconnect 00:37:59.703 ************************************ 00:37:59.703 01:56:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:59.703 * Looking for test storage... 
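Before the target_disconnect suite gets going, the bdevperf summary printed above can be sanity-checked: at the 4096-byte I/O size used by the job, 8238.03 IOPS corresponds to the reported 32.18 MiB/s. A one-line check (any shell arithmetic works; awk is just a convenient choice):

    awk 'BEGIN { printf "%.2f MiB/s\n", 8238.03 * 4096 / 1048576 }'    # prints 32.18 MiB/s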
00:37:59.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:59.703 01:56:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:07.847 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:07.847 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.847 01:56:33 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:07.847 Found net devices under 0000:31:00.0: cvl_0_0 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:07.847 Found net devices under 0000:31:00.1: cvl_0_1 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:38:07.847 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:07.848 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.848 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.848 01:56:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:07.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:07.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:38:07.848 00:38:07.848 --- 10.0.0.2 ping statistics --- 00:38:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.848 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:07.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:38:07.848 00:38:07.848 --- 10.0.0.1 ping statistics --- 00:38:07.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.848 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:07.848 ************************************ 00:38:07.848 START TEST nvmf_target_disconnect_tc1 00:38:07.848 ************************************ 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:38:07.848 
01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:07.848 EAL: No free 2048 kB hugepages reported on node 1 00:38:07.848 [2024-07-12 01:56:34.187615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.848 [2024-07-12 01:56:34.187678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913ff0 with addr=10.0.0.2, port=4420 00:38:07.848 [2024-07-12 01:56:34.187699] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:07.848 [2024-07-12 01:56:34.187709] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:07.848 [2024-07-12 01:56:34.187716] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:38:07.848 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:07.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:07.848 Initializing NVMe Controllers 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:07.848 00:38:07.848 real 0m0.110s 00:38:07.848 user 0m0.044s 00:38:07.848 sys 
0m0.065s 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:07.848 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:07.848 ************************************ 00:38:07.848 END TEST nvmf_target_disconnect_tc1 00:38:07.848 ************************************ 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:08.108 ************************************ 00:38:08.108 START TEST nvmf_target_disconnect_tc2 00:38:08.108 ************************************ 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:08.108 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=74616 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 74616 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 74616 ']' 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:08.109 01:56:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.109 [2024-07-12 01:56:34.331262] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
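Test case tc1 above deliberately runs the reconnect example before any target is listening and passes only if the probe fails, which it does with errno 111 and "Create probe context failed". The NOT helper from autotest_common.sh inverts the wrapped command's exit status; the lines below are only a simplified stand-in for that pattern (the real helper also validates the executable and special-cases exit codes above 128, as the es > 128 check in the trace suggests; paths are shortened relative to the SPDK tree).

    # simplified stand-in for the autotest NOT() helper, for illustration only
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # exit status 0 here means the connection attempt failed, as the test expects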
00:38:08.109 [2024-07-12 01:56:34.331311] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.109 EAL: No free 2048 kB hugepages reported on node 1 00:38:08.109 [2024-07-12 01:56:34.424402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:08.370 [2024-07-12 01:56:34.472454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.370 [2024-07-12 01:56:34.472509] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.370 [2024-07-12 01:56:34.472517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.370 [2024-07-12 01:56:34.472524] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.370 [2024-07-12 01:56:34.472530] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.370 [2024-07-12 01:56:34.472685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:38:08.370 [2024-07-12 01:56:34.472843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:38:08.370 [2024-07-12 01:56:34.473004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:38:08.370 [2024-07-12 01:56:34.473004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 Malloc0 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 [2024-07-12 01:56:35.190660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 [2024-07-12 01:56:35.230926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=74941 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:08.943 01:56:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:09.204 EAL: No free 2048 kB hugepages reported on node 1 00:38:11.133 01:56:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 74616 00:38:11.133 01:56:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 
00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Write completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.133 starting I/O failed 00:38:11.133 Read completed with error (sct=0, sc=8) 00:38:11.134 starting I/O failed 00:38:11.134 Read completed with error (sct=0, sc=8) 00:38:11.134 starting I/O failed 00:38:11.134 Write completed with error (sct=0, sc=8) 00:38:11.134 starting I/O failed 00:38:11.134 Read completed with error (sct=0, sc=8) 00:38:11.134 starting I/O failed 00:38:11.134 [2024-07-12 01:56:37.264027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:11.134 [2024-07-12 01:56:37.264510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.264547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.264836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.264848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 
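The repeated failures in this stretch are the point of tc2: host/target_disconnect.sh@45 killed the target with kill -9 74616 while the reconnect job had 32 I/Os queued (-q 32), so every outstanding command completes with an error and each reconnect attempt dies with errno 111 because nothing is listening on 10.0.0.2:4420 any more. On Linux, errno 111 is ECONNREFUSED, which can be confirmed from the kernel headers (the header path below is the usual location on a glibc system):

    grep -w 111 /usr/include/asm-generic/errno.h    # -> #define ECONNREFUSED 111 /* Connection refused */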
00:38:11.134 [2024-07-12 01:56:37.265207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.265220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.265729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.265765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.266068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.266082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.266540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.266575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.266906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.266920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.267205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.267216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.267655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.267692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.268065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.268079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.268504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.268541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.268744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.268757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 
00:38:11.134 [2024-07-12 01:56:37.269085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.269097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.269442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.269455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.269779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.269791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.270152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.270164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.270516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.270528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.270843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.270855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.271176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.271188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.271505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.271517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.271836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.271848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.272197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.272209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 
00:38:11.134 [2024-07-12 01:56:37.272447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.272458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.272798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.272809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.273156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.273167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.273466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.273477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.273789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.273800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.274118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.274129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.274486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.274498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.274855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.274866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.275221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.275235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.275589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.275600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 
00:38:11.134 [2024-07-12 01:56:37.275833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.134 [2024-07-12 01:56:37.275844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.134 qpair failed and we were unable to recover it. 00:38:11.134 [2024-07-12 01:56:37.276169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.276181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.276505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.276516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.276872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.276883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.277224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.277239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.277461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.277474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.277774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.277785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.278111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.278123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.278353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.278365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.278711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.278722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 
00:38:11.135 [2024-07-12 01:56:37.279089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.279101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.279456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.279468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.279824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.279835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.280196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.280208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.280556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.280568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.280760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.280772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.281114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.281126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.281466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.281478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.281796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.281808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.282178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.282189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 
00:38:11.135 [2024-07-12 01:56:37.282580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.282594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.282923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.282934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.283279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.283290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.283656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.283667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.284001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.284012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.284361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.284371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.284728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.284739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.285089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.285101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.285460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.285471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.285831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.285842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 
00:38:11.135 [2024-07-12 01:56:37.286152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.286163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.286510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.286521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.286880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.286891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.287242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.287253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.287661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.287672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.287988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.288002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.288359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.288373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.288732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.288745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.289057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.289070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 00:38:11.135 [2024-07-12 01:56:37.289382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.135 [2024-07-12 01:56:37.289395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.135 qpair failed and we were unable to recover it. 
00:38:11.136 [2024-07-12 01:56:37.289751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.289764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.290113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.290126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.290451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.290465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.290819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.290832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.291202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.291215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.291562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.291575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.291887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.291900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.292238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.292252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.292569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.292583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.292925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.292938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 
00:38:11.136 [2024-07-12 01:56:37.293221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.293239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.293577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.293590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.293939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.293952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.294264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.294278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.294559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.294572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.294849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.294862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.295217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.295237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.295566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.295578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.295886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.295900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.296223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.296241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 
00:38:11.136 [2024-07-12 01:56:37.296439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.296455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.296763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.296776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.297134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.297146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.297479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.297493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.297812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.297824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.298123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.298136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.298406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.298420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.298756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.298769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.299004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.299016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.299354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.299376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 
00:38:11.136 [2024-07-12 01:56:37.299677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.299690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.299885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.299898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.300214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.300237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.300583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.300601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.300982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.300999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.301212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.301228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.301606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.136 [2024-07-12 01:56:37.301623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.136 qpair failed and we were unable to recover it. 00:38:11.136 [2024-07-12 01:56:37.301986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.302004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.302322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.302340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.302679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.302695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 
00:38:11.137 [2024-07-12 01:56:37.302956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.302972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.303318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.303336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.303658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.303674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.304016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.304033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.304356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.304375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.304729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.304746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.305134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.305150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.305492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.305510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.305829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.305845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.306186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.306203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 
00:38:11.137 [2024-07-12 01:56:37.306504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.306523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.306870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.306887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.307083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.307101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.307472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.307490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.307790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.307808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.308187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.308203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.308579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.308597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.308785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.308802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.309140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.309157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.309528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.309546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 
00:38:11.137 [2024-07-12 01:56:37.309868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.309889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.310209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.310226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.310564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.310581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.310912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.310930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.311259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.311277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.311626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.311643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.311960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.311978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.312326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.312344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.312671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.312693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.313070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.313091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 
00:38:11.137 [2024-07-12 01:56:37.313458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.313479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.313733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.313754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.314000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.314024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.314365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.314387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.314762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.314783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.315107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.137 [2024-07-12 01:56:37.315128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.137 qpair failed and we were unable to recover it. 00:38:11.137 [2024-07-12 01:56:37.315486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.315508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.315885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.315906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.316169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.316189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.316554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.316577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 
00:38:11.138 [2024-07-12 01:56:37.316965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.316986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.317391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.317413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.317755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.317778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.318137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.318159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.318456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.318478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.318827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.318849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.319225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.319264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.319598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.319619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.320002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.320023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.320409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.320431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 
00:38:11.138 [2024-07-12 01:56:37.320808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.320829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.321212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.321239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.321612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.321633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.322016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.322037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.322339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.322361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.322712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.322734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.323118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.323147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.323404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.323436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.323696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.323725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.324106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.324135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 
00:38:11.138 [2024-07-12 01:56:37.324569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.324605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.324981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.325010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.325275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.325304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.325668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.325696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.326016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.326045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.326411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.326439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.326803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.326832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.138 [2024-07-12 01:56:37.327210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.138 [2024-07-12 01:56:37.327255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.138 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.327632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.327662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.327919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.327950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 
00:38:11.139 [2024-07-12 01:56:37.328330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.328360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.328731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.328759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.329121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.329150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.329584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.329613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.329989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.330018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.330395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.330426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.330801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.330830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.331204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.331240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.331639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.331668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.332039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.332068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 
00:38:11.139 [2024-07-12 01:56:37.332447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.332477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.332710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.332737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.333112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.333139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.333470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.333500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.333911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.333939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.334323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.334352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.334609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.334636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.334992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.335021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.335403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.335432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.335757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.335786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 
00:38:11.139 [2024-07-12 01:56:37.336155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.336184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.336603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.336632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.337007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.337035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.337385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.337416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.337810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.337839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.338212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.338248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.338501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.338531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.338904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.338932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.339306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.339336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.339706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.339735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 
00:38:11.139 [2024-07-12 01:56:37.340069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.340104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.340465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.340495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.340855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.340883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.341255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.341285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.139 [2024-07-12 01:56:37.341656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.139 [2024-07-12 01:56:37.341685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.139 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.342044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.342073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.342427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.342456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.342832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.342860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.343104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.343134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.343505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.343534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 
00:38:11.140 [2024-07-12 01:56:37.343907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.343935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.344295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.344325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.344709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.344737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.345067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.345096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.345435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.345467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.345834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.345862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.346247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.346276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.346678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.346706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.347081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.347109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.347377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.347407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 
00:38:11.140 [2024-07-12 01:56:37.347769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.347797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.348160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.348190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.348572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.348601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.348961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.348988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.349335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.349364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.349744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.349771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.350130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.350158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.350531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.350561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.350930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.350959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.351197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.351227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 
00:38:11.140 [2024-07-12 01:56:37.351596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.351625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.352001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.352030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.352288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.352320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.352684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.352713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.353095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.353123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.353494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.353526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.353875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.353903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.354249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.354280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.354656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.354684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.355059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.355087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 
00:38:11.140 [2024-07-12 01:56:37.355455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.355491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.355849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.355877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.356221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.356260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.140 [2024-07-12 01:56:37.356511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.140 [2024-07-12 01:56:37.356540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.140 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.356895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.356923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.357333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.357364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.357734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.357762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.358129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.358158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.358492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.358522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.358843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.358871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 
00:38:11.141 [2024-07-12 01:56:37.359238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.359269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.359662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.359690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.360071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.360100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.360446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.360475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.360873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.360901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.361272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.361302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.361671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.361699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.362067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.362095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.362466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.362495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.362852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.362880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 
00:38:11.141 [2024-07-12 01:56:37.363225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.363262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.363681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.363710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.364073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.364101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.364487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.364516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1dec000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 
Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Write completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 Read completed with error (sct=0, sc=8) 00:38:11.141 starting I/O failed 00:38:11.141 [2024-07-12 01:56:37.364734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:11.141 [2024-07-12 01:56:37.365093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.365106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.365472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.365482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.365841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.365849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.366200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.366208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.366557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.366566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.366920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.366928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.367309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.367318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.367676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.367684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 
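The run of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries above, capped by "CQ transport error -6 (No such device or address) on qpair id 2", is the host side failing back every command still queued on an I/O qpair whose TCP connection has dropped; the connect retries that follow are made on a fresh qpair (0x7f1df8000b90). The fragment below is an illustrative sketch only, assuming an SPDK application with a controller already attached (it is not part of this test), of where those two signals surface in code: the per-command status fields (sct/sc) in the completion callback, and a negative return value (here -6, ENXIO) from spdk_nvme_qpair_process_completions().

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback, registered as cb_fn when an I/O is submitted
 * (e.g. via spdk_nvme_ns_cmd_read()). The sct/sc pair printed in the
 * log comes from these status fields on each failed command. */
static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    bool *failed = ctx;

    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
                cpl->status.sct, cpl->status.sc);
        *failed = true;
    }
}

/* Poll loop: a negative return (such as -6/ENXIO, logged above as
 * "CQ transport error -6") means the qpair's transport is gone and
 * the qpair has to be reconnected or destroyed. */
static int poll_qpair(struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

    if (rc < 0) {
        fprintf(stderr, "qpair transport error: %d\n", rc);
    }
    return rc;
}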
00:38:11.141 [2024-07-12 01:56:37.367949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.141 [2024-07-12 01:56:37.367956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.141 qpair failed and we were unable to recover it. 00:38:11.141 [2024-07-12 01:56:37.368174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.368182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.368510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.368518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.368828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.368836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.369236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.369245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.369557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.369565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.369917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.369924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.370256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.370264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.370627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.370635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.370985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.370994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 
00:38:11.142 [2024-07-12 01:56:37.371166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.371174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.371476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.371484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.371788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.371796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.372139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.372148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.372436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.372445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.372687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.372695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.372971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.372979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.373336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.373344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.373706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.373714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.374040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.374049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 
00:38:11.142 [2024-07-12 01:56:37.374401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.374409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.374739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.374748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.375121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.375128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.375465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.375474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.375801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.375809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.376216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.376224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.376462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.376470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.376800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.376807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.377105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.377116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.377421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.377430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 
00:38:11.142 [2024-07-12 01:56:37.377758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.377767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.378090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.378098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.378450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.378459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.378827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.378836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.379158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.142 [2024-07-12 01:56:37.379166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.142 qpair failed and we were unable to recover it. 00:38:11.142 [2024-07-12 01:56:37.379486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.379494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.379815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.379823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.380146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.380154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.380343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.380353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.380683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.380691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 
00:38:11.143 [2024-07-12 01:56:37.381049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.381058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.381416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.381425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.381755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.381763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.382092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.382100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.382452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.382461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.382833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.382841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.383170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.383177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.383500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.383508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.383850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.383857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.384187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.384195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 
00:38:11.143 [2024-07-12 01:56:37.384547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.384556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.384883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.384891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.385217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.385226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.385578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.385586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.385788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.385797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.386198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.386206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.386559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.386568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.386887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.386895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.387216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.387224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.387580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.387588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 
00:38:11.143 [2024-07-12 01:56:37.387858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.387866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.388190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.388197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.388549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.388558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.388830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.388839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.389166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.389174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.389486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.389494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.389822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.389831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.390163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.390171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.390572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.390584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.390905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.390913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 
00:38:11.143 [2024-07-12 01:56:37.391245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.391254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.391579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.391587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.391827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.391834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.392155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.392163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.392554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.392562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.143 [2024-07-12 01:56:37.392882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.143 [2024-07-12 01:56:37.392891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.143 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.393277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.393285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.393589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.393597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.393924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.393931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.394169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.394176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 
00:38:11.144 [2024-07-12 01:56:37.394504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.394512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.394842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.394850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.395182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.395190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.395543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.395553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.395786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.395793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.396109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.396117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.396470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.396478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.396797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.396805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.397125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.397132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.397457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.397465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 
00:38:11.144 [2024-07-12 01:56:37.397789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.397797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.398116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.398124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.398480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.398489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.398652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.398660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.399043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.399051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.399402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.399411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.399750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.399761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.399935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.399943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.400273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.400285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.400607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.400615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 
00:38:11.144 [2024-07-12 01:56:37.400943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.400950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.401311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.401320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.401554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.401562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.401916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.401923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.402272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.402281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.402624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.402632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.402982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.402991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.403188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.403196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.403519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.403529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.403847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.403855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 
00:38:11.144 [2024-07-12 01:56:37.404214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.404222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.404586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.404594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.404917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.404925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.405282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.405290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.405691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.405699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.406035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.406051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.144 [2024-07-12 01:56:37.406394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.144 [2024-07-12 01:56:37.406402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.144 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.406720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.406728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.407052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.407059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.407415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.407423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 
00:38:11.145 [2024-07-12 01:56:37.407747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.407754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.408077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.408085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.408442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.408450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.408788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.408795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.409129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.409137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.409471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.409480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.409806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.409814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.410145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.410154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.410530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.410539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.410891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.410899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 
00:38:11.145 [2024-07-12 01:56:37.411238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.411246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.411580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.411587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.411823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.411831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.412146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.412155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.412465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.412473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.412795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.412802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.413125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.413133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.413382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.413391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.413581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.413589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.413934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.413942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 
00:38:11.145 [2024-07-12 01:56:37.414261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.414270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.414583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.414591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.414919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.414927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.415248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.415256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.415576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.415584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.415819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.415826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.416138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.416146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.416478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.416486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.416818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.416828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.417177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.417186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 
00:38:11.145 [2024-07-12 01:56:37.417506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.417514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.417844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.417853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.418180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.418188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.418512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.418521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.418849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.418857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.419181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.419189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.419509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.419518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.145 qpair failed and we were unable to recover it. 00:38:11.145 [2024-07-12 01:56:37.419870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.145 [2024-07-12 01:56:37.419878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.420200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.420209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.420528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.420537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 
00:38:11.146 [2024-07-12 01:56:37.420930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.420939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.421263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.421272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.421644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.421652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.421968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.421976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.422298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.422306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.422656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.422664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.422987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.422995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.423314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.423322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.423560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.423567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.423886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.423893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 
00:38:11.146 [2024-07-12 01:56:37.424073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.424080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.424359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.424366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.424589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.424597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.424912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.424919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.425245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.425253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.425656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.425664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.425905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.425912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.426150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.426158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.426574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.426583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.426768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.426776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 
00:38:11.146 [2024-07-12 01:56:37.427062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.427070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.427379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.427396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.427740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.427747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.428062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.428070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.428402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.428410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.428719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.428727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.429045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.429053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.429372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.429380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.429770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.429780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.429962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.429970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 
00:38:11.146 [2024-07-12 01:56:37.430212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.430220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.430546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.430555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.430880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.430889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.431248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.431257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.431576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.431584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.431938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.431946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.432268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.432276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.432629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.146 [2024-07-12 01:56:37.432637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.146 qpair failed and we were unable to recover it. 00:38:11.146 [2024-07-12 01:56:37.432835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.147 [2024-07-12 01:56:37.432843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.147 qpair failed and we were unable to recover it. 00:38:11.147 [2024-07-12 01:56:37.433244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.147 [2024-07-12 01:56:37.433252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.147 qpair failed and we were unable to recover it. 
00:38:11.147 [2024-07-12 01:56:37.433624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.147 [2024-07-12 01:56:37.433633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.147 qpair failed and we were unable to recover it. 
[... the same sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-12 01:56:37.433983 through 01:56:37.500945 ...]
00:38:11.481 [2024-07-12 01:56:37.501271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.501279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.501584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.501591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.501913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.501921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.502242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.502250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.502576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.502583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.502898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.502906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.503226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.503237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.503544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.503552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.503874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.503881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.504236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.504244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 
00:38:11.481 [2024-07-12 01:56:37.504421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.504429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.504760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.504767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.504962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.504970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.505294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.505302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.505631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.505639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.505962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.505970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.506300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.506309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.506635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.506642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.506973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.506985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.507318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.507327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 
00:38:11.481 [2024-07-12 01:56:37.507650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.507658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.507968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.507976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.508299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.508307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.508511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.508519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.508843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.508851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.509160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.509169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.509498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.509516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.509718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.509726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.509932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.509939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.510202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.510209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 
00:38:11.481 [2024-07-12 01:56:37.510537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.510544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.510865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.510873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.511208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.511216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.511446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.511455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.511781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.511789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.512115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.512122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.512474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.512482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.512848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.512856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.481 qpair failed and we were unable to recover it. 00:38:11.481 [2024-07-12 01:56:37.513186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.481 [2024-07-12 01:56:37.513194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.513525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.513533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 
00:38:11.482 [2024-07-12 01:56:37.513860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.513868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.514220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.514233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.514561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.514569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.514919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.514926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.515262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.515269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.515623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.515632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.515959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.515967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.516319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.516327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.516656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.516663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.516998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.517007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 
00:38:11.482 [2024-07-12 01:56:37.517362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.517370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.517726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.517734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.518071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.518079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.518433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.518440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.518762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.518771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.519094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.519102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.519425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.519434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.519727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.519734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.520072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.520080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.520408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.520416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 
00:38:11.482 [2024-07-12 01:56:37.520761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.520768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.521120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.521128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.521517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.521525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.521868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.521876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.522206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.522213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.522516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.522523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.522857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.522864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.523198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.523207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.523594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.523602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.523963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.523972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 
00:38:11.482 [2024-07-12 01:56:37.524161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.524170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.524392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.524401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.524751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.524758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.525120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.525128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.525472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.525480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.525820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.525828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.526159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.526167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.526496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.526504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.526832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.526841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.482 qpair failed and we were unable to recover it. 00:38:11.482 [2024-07-12 01:56:37.527061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.482 [2024-07-12 01:56:37.527070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 
00:38:11.483 [2024-07-12 01:56:37.527423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.527431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.527666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.527674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.527905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.527913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.528267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.528276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.528600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.528607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.528859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.528869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.529165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.529174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.529513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.529522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.529766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.529773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.530094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.530102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 
00:38:11.483 [2024-07-12 01:56:37.530424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.530433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.530796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.530804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.531008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.531015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.531337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.531345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.531697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.531704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.532036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.532045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.532354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.532361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.532665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.532673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.533023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.533031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.533428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.533437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 
00:38:11.483 [2024-07-12 01:56:37.533733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.533741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.533943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.533952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.534272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.534281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.534583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.534593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.534920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.534927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.535167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.535175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.535500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.535508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.535864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.535872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.536172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.536181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.536395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.536403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 
00:38:11.483 [2024-07-12 01:56:37.536692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.536700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.483 [2024-07-12 01:56:37.537019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.483 [2024-07-12 01:56:37.537027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.483 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.537441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.537449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.537769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.537777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.538104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.538112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.538473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.538481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.538567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.538573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.538769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.538776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.539106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.539114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.539451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.539459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 
00:38:11.484 [2024-07-12 01:56:37.539792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.539801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.540100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.540107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.540329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.540337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.540683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.540691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.541015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.541022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.541259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.541269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.541567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.541576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.541923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.541932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.542148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.542156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 00:38:11.484 [2024-07-12 01:56:37.542506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.484 [2024-07-12 01:56:37.542515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.484 qpair failed and we were unable to recover it. 
00:38:11.484 [2024-07-12 01:56:37.542658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.484 [2024-07-12 01:56:37.542665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:11.484 qpair failed and we were unable to recover it.
00:38:11.484 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 01:56:37.542 through 01:56:37.608 ...]
00:38:11.489 [2024-07-12 01:56:37.608476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.489 [2024-07-12 01:56:37.608485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:11.489 qpair failed and we were unable to recover it.
00:38:11.489 [2024-07-12 01:56:37.608834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.608842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.609155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.609163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.609493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.609501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.609832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.609840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.610221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.610232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.610557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.610565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.610894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.610903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.611225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.611237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.611578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.489 [2024-07-12 01:56:37.611586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.489 qpair failed and we were unable to recover it. 00:38:11.489 [2024-07-12 01:56:37.611914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.611923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 
00:38:11.490 [2024-07-12 01:56:37.612256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.612264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.612594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.612602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.612957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.612966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.613296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.613304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.613621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.613629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.613860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.613867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.614177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.614186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.614512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.614520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.614842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.614851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.615039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.615048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 
00:38:11.490 [2024-07-12 01:56:37.615362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.615370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.615554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.615562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.615882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.615890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.616216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.616223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.616580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.616587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.616929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.616938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.617257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.617266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.617587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.617596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.617945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.617954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.618283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.618292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 
00:38:11.490 [2024-07-12 01:56:37.618612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.618621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.618947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.618955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.619194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.619203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.619533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.619541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.619908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.619917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.620248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.620256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.620607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.620616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.620937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.620946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.621266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.621275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.621498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.621507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 
00:38:11.490 [2024-07-12 01:56:37.621871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.621880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.622115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.622124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.622535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.622543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.622872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.622879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.623234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.623242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.623472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.623480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.623810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.623817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.624147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.624155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.624480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.624488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 00:38:11.490 [2024-07-12 01:56:37.624654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.490 [2024-07-12 01:56:37.624661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.490 qpair failed and we were unable to recover it. 
00:38:11.490 [2024-07-12 01:56:37.624887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.624896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.625222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.625233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.625585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.625593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.625939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.625948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.626264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.626273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.626564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.626573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.626923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.626931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.627251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.627259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.627482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.627490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.627829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.627838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 
00:38:11.491 [2024-07-12 01:56:37.628187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.628196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.628525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.628533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.628855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.628864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.629191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.629199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.629537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.629545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.629780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.629790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.630113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.630121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.630472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.630481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.630829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.630837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.631164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.631171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 
00:38:11.491 [2024-07-12 01:56:37.631518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.631526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.631857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.631865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.632222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.632231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.632274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.632281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.632605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.632613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.632940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.632947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.633213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.633220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.633533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.633541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.633941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.633949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.634269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.634277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 
00:38:11.491 [2024-07-12 01:56:37.634624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.634632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.634985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.634993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.635319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.635326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.635648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.635658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.635981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.635988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.636309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.636317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.636639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.636647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.636969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.636977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.637298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.637305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.637592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.637600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 
00:38:11.491 [2024-07-12 01:56:37.637929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.491 [2024-07-12 01:56:37.637937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.491 qpair failed and we were unable to recover it. 00:38:11.491 [2024-07-12 01:56:37.638265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.638273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.638601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.638609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.638797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.638805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.638991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.638999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.639284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.639292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.639629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.639638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.639987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.639995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.640330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.640338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.640666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.640674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 
00:38:11.492 [2024-07-12 01:56:37.640995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.641002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.641355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.641363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.641544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.641551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.641847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.641854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.642177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.642184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.642504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.642514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.642695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.642702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.643041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.643050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.643371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.643379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.643737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.643745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 
00:38:11.492 [2024-07-12 01:56:37.644066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.644074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.644396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.644404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.644731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.644738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.645100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.645107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.645429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.645437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.645801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.645809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.646138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.646145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.646479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.646487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.646815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.646823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.647147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.647155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 
00:38:11.492 [2024-07-12 01:56:37.647493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.647502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.647854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.647862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.648196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.648204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.648530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.648539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.648861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.492 [2024-07-12 01:56:37.648868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.492 qpair failed and we were unable to recover it. 00:38:11.492 [2024-07-12 01:56:37.649169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.649177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.649507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.649516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.649833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.649840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.650163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.650170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.650485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.650493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 
00:38:11.493 [2024-07-12 01:56:37.650817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.650824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.651142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.651149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.651380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.651388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.651704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.651713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.652033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.652041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.652358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.652366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.652688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.652696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.652997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.653004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.653226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.653237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 00:38:11.493 [2024-07-12 01:56:37.653448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.493 [2024-07-12 01:56:37.653455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.493 qpair failed and we were unable to recover it. 
00:38:11.498 [2024-07-12 01:56:37.719037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.719046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.719280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.719288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.719614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.719622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.719940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.719948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.720187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.720195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.720517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.720525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.720833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.720841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.721057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.721064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.721249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.721257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.721588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.721596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 
00:38:11.498 [2024-07-12 01:56:37.721944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.721952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.722279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.722287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.722624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.722632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.722964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.722972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.723324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.723331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.723709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.723716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.724050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.724059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.724384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.724392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.724743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.724751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.725073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.725082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 
00:38:11.498 [2024-07-12 01:56:37.725404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.725412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.725735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.725743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.726101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.726109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.726442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.498 [2024-07-12 01:56:37.726450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.498 qpair failed and we were unable to recover it. 00:38:11.498 [2024-07-12 01:56:37.726780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.726787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.727084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.727091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.727388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.727395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.727747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.727755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.727992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.727999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.728320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.728328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 
00:38:11.499 [2024-07-12 01:56:37.728690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.728697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.729020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.729028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.729347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.729357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.729708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.729716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.730063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.730071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.730397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.730407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.730695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.730703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.731083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.731091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.731408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.731415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.731650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.731657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 
00:38:11.499 [2024-07-12 01:56:37.731989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.731997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.732215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.732223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.732556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.732564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.732886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.732893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.733217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.733225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.733452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.733460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.733778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.733785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.734109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.734117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.734312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.734321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.734534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.734541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 
00:38:11.499 [2024-07-12 01:56:37.734759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.734767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.735087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.735094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.735418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.735426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.735748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.735757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.736070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.736077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.736402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.736410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.736745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.736752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.737073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.737080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.737266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.737276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.737615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.737622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 
00:38:11.499 [2024-07-12 01:56:37.737989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.737997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.738326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.738335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.738682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.738691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.739026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.739033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.739400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.499 [2024-07-12 01:56:37.739408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.499 qpair failed and we were unable to recover it. 00:38:11.499 [2024-07-12 01:56:37.739758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.739766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.740118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.740127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.740427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.740435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.740774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.740782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.741107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.741114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 
00:38:11.500 [2024-07-12 01:56:37.741439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.741447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.741684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.741691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.741972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.741981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.742302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.742310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.742681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.742689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.743012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.743020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.743375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.743383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.743721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.743729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.743973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.743980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.744251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.744259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 
00:38:11.500 [2024-07-12 01:56:37.744427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.744434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.744779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.744787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.745090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.745097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.745426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.745434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.745708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.745715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.746043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.746051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.746378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.746388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.746729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.746737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.747064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.747072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.747140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.747148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 
00:38:11.500 [2024-07-12 01:56:37.747404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.747412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.747741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.747750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.748071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.748079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.748414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.748423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.748751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.748759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.749068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.749077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.749393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.749401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.749669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.749676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.750000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.750008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.500 qpair failed and we were unable to recover it. 00:38:11.500 [2024-07-12 01:56:37.750318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.500 [2024-07-12 01:56:37.750326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 
00:38:11.501 [2024-07-12 01:56:37.750519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.750527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.750819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.750826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.751155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.751163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.751492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.751501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.751768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.751776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.752098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.752106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.752426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.752435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.752794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.752802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.753130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.753137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.753519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.753527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 
00:38:11.501 [2024-07-12 01:56:37.753923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.753931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.754284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.754293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.754623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.754633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.755041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.755049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.755347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.755354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.755699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.755707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.756035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.756044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.756371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.756379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.756751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.756758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.757108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.757117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 
00:38:11.501 [2024-07-12 01:56:37.757428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.757437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.757760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.757769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.758127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.758136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.758489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.758497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.758818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.758826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.759151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.759160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.759493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.759503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.759853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.759861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.760182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.760191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 00:38:11.501 [2024-07-12 01:56:37.760513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.760522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it. 
00:38:11.501 [2024-07-12 01:56:37.760843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.501 [2024-07-12 01:56:37.760851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.501 qpair failed and we were unable to recover it.
00:38:11.501 [... the same error pattern repeats for every reconnect attempt logged between 01:56:37.760843 and 01:56:37.825802: posix.c:1037:posix_sock_create reports "connect() failed, errno = 111", nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420", and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:11.796 [2024-07-12 01:56:37.825793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.825802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it.
00:38:11.796 [2024-07-12 01:56:37.826093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.826101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.826442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.826450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.826804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.826812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.827142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.827150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.827338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.827346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.827681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.827688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.827812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.827820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.828162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.828170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.828492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.796 [2024-07-12 01:56:37.828500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.796 qpair failed and we were unable to recover it. 00:38:11.796 [2024-07-12 01:56:37.828834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.828841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 
00:38:11.797 [2024-07-12 01:56:37.829194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.829203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.829481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.829489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.829807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.829818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.830144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.830152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.830355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.830364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.830577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.830586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.830913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.830921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.831149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.831158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.831486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.831494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.831688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.831697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 
00:38:11.797 [2024-07-12 01:56:37.832020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.832029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.832241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.832251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.832573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.832581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.832819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.832827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.833182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.833191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.833420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.833428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.833761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.833769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.833954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.833963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.834189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.834197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.834494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.834502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 
00:38:11.797 [2024-07-12 01:56:37.834833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.834842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.835196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.835205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.797 [2024-07-12 01:56:37.835433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.797 [2024-07-12 01:56:37.835442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.797 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.835702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.835711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.836053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.836061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.836297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.836305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.836612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.836619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.836953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.836961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.837161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.837170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.837503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.837512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 
00:38:11.798 [2024-07-12 01:56:37.837839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.837847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.838179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.838187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.838501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.838510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.838845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.838854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.839193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.839202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.839545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.839554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.839865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.839874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.840197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.840205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.840539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.840547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.840876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.840885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 
00:38:11.798 [2024-07-12 01:56:37.841196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.841205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.841537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.841546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.841866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.841874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.798 qpair failed and we were unable to recover it. 00:38:11.798 [2024-07-12 01:56:37.842198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.798 [2024-07-12 01:56:37.842207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.842540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.842549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.842869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.842878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.843224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.843235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.843542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.843551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.843856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.843866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.844193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.844201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 
00:38:11.799 [2024-07-12 01:56:37.844529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.844539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.844871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.844879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.845245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.845254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.845459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.845466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.845678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.845686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.846035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.846042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.846399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.846407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.846733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.846744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.847073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.847081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.847408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.847417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 
00:38:11.799 [2024-07-12 01:56:37.847735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.847744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.848066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.848074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.799 [2024-07-12 01:56:37.848391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.799 [2024-07-12 01:56:37.848400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.799 qpair failed and we were unable to recover it. 00:38:11.803 [2024-07-12 01:56:37.848725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.848733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.849091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.849100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.849312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.849321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.849546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.849554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.849906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.849914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.850263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.850274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.850618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.850628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 
00:38:11.804 [2024-07-12 01:56:37.850946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.850955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.851176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.851185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.851504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.851513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.851835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.851844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.852154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.852163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.804 [2024-07-12 01:56:37.852493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.804 [2024-07-12 01:56:37.852501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.804 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.853189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.853206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.853549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.853559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.853786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.853795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.854125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.854133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 
00:38:11.805 [2024-07-12 01:56:37.854469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.854478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.854805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.854814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.855238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.855248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.855545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.855552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.855866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.855876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.856199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.856207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.856423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.856431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.856628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.856636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.856855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.856863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.856981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.856989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 
00:38:11.805 [2024-07-12 01:56:37.857201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.857208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.857511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.857519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.857886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.857894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.805 qpair failed and we were unable to recover it. 00:38:11.805 [2024-07-12 01:56:37.858264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.805 [2024-07-12 01:56:37.858273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.858623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.858631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.858961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.858969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.859208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.859216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.859603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.859612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.859967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.859975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.860327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.860335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 
00:38:11.806 [2024-07-12 01:56:37.860629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.860643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.860983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.860991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.861222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.861233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.861560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.861568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.861924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.861931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.862320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.862329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.862667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.862675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.863004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.863011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.863330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.863338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.863627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.863637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 
00:38:11.806 [2024-07-12 01:56:37.863959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.863967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.864290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.864299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.864639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.864647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.865005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.865014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.865252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.865261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.865567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.865575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.806 [2024-07-12 01:56:37.865877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.806 [2024-07-12 01:56:37.865885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.806 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.866217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.866225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.866588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.866596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.866792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.866800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 
00:38:11.807 [2024-07-12 01:56:37.867128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.867135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.867478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.867487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.867854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.867862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.868195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.868203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.868557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.868566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.868888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.868895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.869182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.869190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.869423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.869431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.869770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.869777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.870102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.870110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 
00:38:11.807 [2024-07-12 01:56:37.870451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.870459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.870739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.870747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.871085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.871093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.871288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.871296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.871606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.871613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.871967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.871974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.872329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.872338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.807 [2024-07-12 01:56:37.872643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.807 [2024-07-12 01:56:37.872651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.807 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.872984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.872992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.873228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.873239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 
00:38:11.808 [2024-07-12 01:56:37.873457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.873465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.873685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.873693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.873968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.873975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.874365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.874373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.874680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.874687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.875040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.875048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.875389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.875397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.875717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.875724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.876094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.876103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.876450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.876463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 
00:38:11.808 [2024-07-12 01:56:37.876629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.876637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.876924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.876933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.877272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.877281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.877591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.877598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.877925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.877933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.878268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.878275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.878591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.878600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.878933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.878941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.879279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.808 [2024-07-12 01:56:37.879287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.808 qpair failed and we were unable to recover it. 00:38:11.808 [2024-07-12 01:56:37.879584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.879593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 
00:38:11.809 [2024-07-12 01:56:37.879896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.879904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.880194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.880202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.880527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.880535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.880860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.880868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.881224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.881236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.881413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.881422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.881678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.881686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.882014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.882023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.882318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.882326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.882678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.882687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 
00:38:11.809 [2024-07-12 01:56:37.882917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.882925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.883141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.883148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.883450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.883458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.883844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.883852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.884185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.884193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.884511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.884520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.884883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.884891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.885149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.885156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.885471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.885479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.885673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.885681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 
00:38:11.809 [2024-07-12 01:56:37.885998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.886007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.886416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.809 [2024-07-12 01:56:37.886424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.809 qpair failed and we were unable to recover it. 00:38:11.809 [2024-07-12 01:56:37.886748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.886755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.886945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.886953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.887262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.887271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.887607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.887615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.887936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.887944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.888275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.888283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.888592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.888600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.888934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.888943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 
00:38:11.810 [2024-07-12 01:56:37.889265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.889273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.889575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.889583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.889933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.889941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.890251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.890259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.890462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.890470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.890809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.890817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.891168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.891176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.891503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.891512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.891839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.891846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.892168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.892176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 
00:38:11.810 [2024-07-12 01:56:37.892514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.892523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.892666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.892674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.892993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.893002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.810 [2024-07-12 01:56:37.893324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.810 [2024-07-12 01:56:37.893332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.810 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.893701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.893708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.894098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.894106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.894290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.894298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.894517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.894525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.894860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.894868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.895176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.895184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 
00:38:11.811 [2024-07-12 01:56:37.895425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.895433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.895622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.895630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.895969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.895979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.896296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.896304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.896662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.896670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.896996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.897003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.897239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.897248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.897501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.897508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.897726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.897733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.898084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.898092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 
00:38:11.811 [2024-07-12 01:56:37.898424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.898433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.898798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.898805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.899142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.899151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.899319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.899327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.899632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.811 [2024-07-12 01:56:37.899641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.811 qpair failed and we were unable to recover it. 00:38:11.811 [2024-07-12 01:56:37.899950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.899959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.900312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.900320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.900610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.900618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.900887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.900894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.901113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.901122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 
00:38:11.812 [2024-07-12 01:56:37.901516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.901524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.901847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.901855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.902066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.902074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.902424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.902433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.902781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.902790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.903117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.903124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.903488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.903496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.903805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.903813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.904114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.904121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.904352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.904359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 
00:38:11.812 [2024-07-12 01:56:37.904606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.904614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.904947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.904956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.905306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.905313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.905527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.905535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.812 qpair failed and we were unable to recover it. 00:38:11.812 [2024-07-12 01:56:37.905876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.812 [2024-07-12 01:56:37.905884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.906238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.906246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.906416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.906424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.906764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.906772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.906996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.907003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.907324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.907333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 
00:38:11.813 [2024-07-12 01:56:37.907674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.907682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.908000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.908009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.908354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.908362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.908664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.908672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.908997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.909005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.909190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.813 [2024-07-12 01:56:37.909199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.813 qpair failed and we were unable to recover it. 00:38:11.813 [2024-07-12 01:56:37.909503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.909512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.909862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.909871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.910193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.910201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.910520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.910528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 
00:38:11.814 [2024-07-12 01:56:37.910724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.910733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.911052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.911060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.911399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.911408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.911749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.911757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.912095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.912104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.912325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.912333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.912563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.912570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.912839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.912847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.913181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.913189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.913597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.913607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 
00:38:11.814 [2024-07-12 01:56:37.913936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.913944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.914265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.914274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.914504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.914512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.914825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.914834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.915156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.915164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.915534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.915542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.915768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.915776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.814 qpair failed and we were unable to recover it. 00:38:11.814 [2024-07-12 01:56:37.915995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.814 [2024-07-12 01:56:37.916002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.916332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.916340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.916667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.916674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 
00:38:11.815 [2024-07-12 01:56:37.916995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.917004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.917318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.917326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.917614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.917623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.917970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.917977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.918202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.918209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.918548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.918556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.918920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.918928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.919195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.919204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.919593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.919602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 00:38:11.815 [2024-07-12 01:56:37.919917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.815 [2024-07-12 01:56:37.919926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.815 qpair failed and we were unable to recover it. 
00:38:11.815 [2024-07-12 01:56:37.920264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.815 [2024-07-12 01:56:37.920272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:11.815 qpair failed and we were unable to recover it.
[... the identical three-line sequence -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error on tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats for every retry from 01:56:37.920 through 01:56:37.986 ...]
00:38:11.826 [2024-07-12 01:56:37.986091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.826 [2024-07-12 01:56:37.986098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:11.826 qpair failed and we were unable to recover it.
00:38:11.826 [2024-07-12 01:56:37.986431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.986440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.986757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.986765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.987118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.987126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.987486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.987494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.987727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.987735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.988065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.988073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.988400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.988408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.988786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.988794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.988991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.988998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.989121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.989130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 
00:38:11.826 [2024-07-12 01:56:37.989370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.989378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.989574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.989582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.989792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.989799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.990020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.990029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.990125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.990132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.990440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.990448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.990790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.990798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.991224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.991237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.991479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.991487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.991649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.991658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 
00:38:11.826 [2024-07-12 01:56:37.991969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.991977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.826 [2024-07-12 01:56:37.992301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.826 [2024-07-12 01:56:37.992309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.826 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.992680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.992688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.993037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.993045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.993266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.993274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.993482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.993490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.993801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.993809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.994149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.994157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.994561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.994569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.994820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.994828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 
00:38:11.827 [2024-07-12 01:56:37.995210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.995218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.995401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.995409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.995762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.995769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.996001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.996009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.996342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.996351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.996694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.996703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.997025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.997034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.997233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.997241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.997462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.997470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.997756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.997763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 
00:38:11.827 [2024-07-12 01:56:37.998099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.998107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.998344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.998352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.998577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.998584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.998780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.998787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.999126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.999134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.999476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.999485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:37.999821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:37.999830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.000165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.000173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.000339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.000347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.000633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.000640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 
00:38:11.827 [2024-07-12 01:56:38.000881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.000889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.001172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.001179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.001477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.001485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.001813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.001823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.002047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.002055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.002428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.002438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.002786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.002794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.003118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.003127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.003524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.003532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.003862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.003870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 
00:38:11.827 [2024-07-12 01:56:38.004167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.004176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.004514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.004523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.004823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.004832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.005176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.005185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.005521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.005530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.005898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.005907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.006243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.006252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.006573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.006583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.006945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.006954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.007380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.007388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 
00:38:11.827 [2024-07-12 01:56:38.007593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.007601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.007899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.007906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.008090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.008098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.008483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.008492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.008801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.008808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.009130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.009139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.009427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.009437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.009772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.009780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.010090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.010098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.827 qpair failed and we were unable to recover it. 00:38:11.827 [2024-07-12 01:56:38.010348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.827 [2024-07-12 01:56:38.010356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 
00:38:11.828 [2024-07-12 01:56:38.010617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.010624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.010935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.010944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.011221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.011233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.011567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.011575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.011749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.011757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.012107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.012116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.012455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.012463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.012785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.012794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.013116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.013126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.013539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.013548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 
00:38:11.828 [2024-07-12 01:56:38.014337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.014354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.014540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.014549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.014856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.014864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.015248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.015256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.015568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.015577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.015878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.015887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.016079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.016088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.016397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.016406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.016787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.016794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.017121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.017129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 
00:38:11.828 [2024-07-12 01:56:38.017555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.017564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.017888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.017907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.018264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.018273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.018631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.018639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.018968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.018977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.019311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.019319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.019657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.019665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.019977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.019987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.020291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.020301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.020609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.020618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 
00:38:11.828 [2024-07-12 01:56:38.020847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.020856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.021185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.021193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.021421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.021429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.021755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.021764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.022107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.022116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.022458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.022467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.022821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.022831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.023031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.023040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.023371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.023379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.023742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.023752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 
00:38:11.828 [2024-07-12 01:56:38.024079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.828 [2024-07-12 01:56:38.024089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.828 qpair failed and we were unable to recover it. 00:38:11.828 [2024-07-12 01:56:38.024327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.024337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.024668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.024676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.025007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.025016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.025370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.025380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.025710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.025718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.026032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.026040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.026253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.026262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.026605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.026613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 00:38:11.829 [2024-07-12 01:56:38.026807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.829 [2024-07-12 01:56:38.026815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.829 qpair failed and we were unable to recover it. 
00:38:11.829 [2024-07-12 01:56:38.027 - 01:56:38.093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:11.829 [2024-07-12 01:56:38.027 - 01:56:38.093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:11.829 qpair failed and we were unable to recover it.
00:38:11.833 (the three lines above repeat verbatim for every connection retry in this interval; only the microsecond timestamps differ between attempts)
00:38:11.833 [2024-07-12 01:56:38.094055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.094063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.094285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.094293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.094693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.094704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.095103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.095119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.095292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.095301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.095692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.095700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.095919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.095927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.096159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.096168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.096276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.096283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.096839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.096857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.097090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.097097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.097432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.097439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.097748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.097754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.098121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.098128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.098361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.098369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.098732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.098739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.099074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.099083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.099324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.099331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.099590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.099600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.099908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.099915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.100241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.100249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.100607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.100615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.100981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.100990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.101291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.101298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.101638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.101646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.101877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.101884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.102003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.102010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.102303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.102311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.102615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.102623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.102952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.102961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.103337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.103344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.103673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.103682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.104012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.104019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.104370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.104378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.104733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.104740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.105077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.105084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.105428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.105436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.105632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.105639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.105937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.105944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.106320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.106328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.106565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.106572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.106893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.106900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.107236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.107244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.107597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.107605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.107948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.107955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.108310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.108318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.108669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.108676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.109010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.109018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.109457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.109465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.109784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.109793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.110015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.110023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.110254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.110261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.110643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.110650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.110844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.110851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.111137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.111144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.111446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.111454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.111792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.111802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.112156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.112163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.112397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.112405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.112733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.112741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 
00:38:11.833 [2024-07-12 01:56:38.113071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.113078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.833 [2024-07-12 01:56:38.113393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.833 [2024-07-12 01:56:38.113401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.833 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.113753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.113760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.114089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.114097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.114431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.114438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.114761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.114768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.115002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.115009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.115217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.115224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.115552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.115560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.115909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.115916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 
00:38:11.834 [2024-07-12 01:56:38.116260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.116268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.116565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.116573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.116902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.116910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.117236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.117244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.117796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.117812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.118114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.118121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.118551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.118559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.118913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.118920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.119286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.119293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.119728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.119736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 
00:38:11.834 [2024-07-12 01:56:38.120061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.120069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.120397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.120405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.120732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.120739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.120938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.120946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.121276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.121284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.121571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.121579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.121930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.121937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.122263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.122271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.122578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.122586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:11.834 [2024-07-12 01:56:38.122939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.122946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 
00:38:11.834 [2024-07-12 01:56:38.123296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:11.834 [2024-07-12 01:56:38.123303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:11.834 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.123624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.123632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.123959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.123967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.124333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.124340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.124647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.124654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.124989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.124995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.125322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.125332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.125661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.125667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.126054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.126060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.126387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.126394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 
00:38:12.109 [2024-07-12 01:56:38.126709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.126716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.127031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.127039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.127389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.127396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.127733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.127741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.128071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.128078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.128279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.128285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.128657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.128663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.128973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.128980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.129350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.129356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.129655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.129662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 
00:38:12.109 [2024-07-12 01:56:38.130019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.130025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.130105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.130111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.130406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.130413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.130628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.130634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.130957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.130965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.131163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.131170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.131398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.131405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.131600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.131607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.131907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.131914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.132101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.132107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 
00:38:12.109 [2024-07-12 01:56:38.132438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.132445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.132781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.109 [2024-07-12 01:56:38.132787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.109 qpair failed and we were unable to recover it. 00:38:12.109 [2024-07-12 01:56:38.133112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.133118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.133447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.133454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.133705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.133713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.134044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.134050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.134246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.134253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.134503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.134510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.134804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.134811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.134977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.134991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 
00:38:12.110 [2024-07-12 01:56:38.135175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.135181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.135489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.135496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.135714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.135721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.136097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.136105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.136426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.136433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.136757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.136764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.137022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.137030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.137370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.137377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.137702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.137710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 00:38:12.110 [2024-07-12 01:56:38.138045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.110 [2024-07-12 01:56:38.138052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.110 qpair failed and we were unable to recover it. 
00:38:12.110 - 00:38:12.115 [2024-07-12 01:56:38.138 - 01:56:38.202] (the same three-message sequence shown above — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously throughout this interval, with only the microsecond timestamps advancing.)
00:38:12.115 [2024-07-12 01:56:38.202805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.202811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.203092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.203106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.203422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.203429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.203742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.203749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.204075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.204081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.204405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.204413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.204564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.204572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.204812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.204819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.205058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.205065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.205252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.205259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 
00:38:12.115 [2024-07-12 01:56:38.205561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.205568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.205903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.205909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.206203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.206210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.206536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.206543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.206808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.206814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.207125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.207132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.207308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.207316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.207555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.207561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.115 [2024-07-12 01:56:38.207837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.115 [2024-07-12 01:56:38.207844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.115 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.208007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.208015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 
00:38:12.116 [2024-07-12 01:56:38.208354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.208361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.208647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.208654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.208841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.208848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.209071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.209079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.209404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.209411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.209814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.209821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.210000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.210007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.210303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.210310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.210602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.210609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.210888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.210895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 
00:38:12.116 [2024-07-12 01:56:38.211223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.211233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.211364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.211371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.211788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.211795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.212104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.212113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.213130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.213148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.213518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.213527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.213876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.213883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.214216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.214223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.214610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.214617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.214953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.214960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 
00:38:12.116 [2024-07-12 01:56:38.215198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.215206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.215540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.215547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.215698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.215705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.216018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.216024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.216324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.216332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.216658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.216665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.216930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.216936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.217299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.217307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.217620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.217627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.217962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.217969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 
00:38:12.116 [2024-07-12 01:56:38.218271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.218278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.218644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.218651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.218981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.218987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.219319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.219327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.219664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.219671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.219990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.219998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.220319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.220326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.220690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.220697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.221026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.221033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 00:38:12.116 [2024-07-12 01:56:38.221334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.116 [2024-07-12 01:56:38.221340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.116 qpair failed and we were unable to recover it. 
00:38:12.117 [2024-07-12 01:56:38.221653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.221661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.221989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.221997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.222328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.222335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.222651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.222661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.222984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.222996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.223320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.223331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.223669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.223678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.224007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.224015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.224365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.224374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.224598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.224605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 
00:38:12.117 [2024-07-12 01:56:38.224836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.224843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.225152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.225158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.225530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.225537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.225868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.225874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.226234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.226242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.226433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.226440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.226768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.226774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.227082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.227089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.227422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.227429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.227786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.227793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 
00:38:12.117 [2024-07-12 01:56:38.228131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.228138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.228407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.228414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.228720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.228727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.229050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.229056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.229355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.229362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.229699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.229705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.230027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.230034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.230358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.230365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.230779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.230786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.231116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.231123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 
00:38:12.117 [2024-07-12 01:56:38.231912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.231927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.232238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.232246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.232951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.232965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.233266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.233275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.117 qpair failed and we were unable to recover it. 00:38:12.117 [2024-07-12 01:56:38.233976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.117 [2024-07-12 01:56:38.233990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.234297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.234306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.234637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.234645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.234950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.234968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.235316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.235331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.235655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.235663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 
00:38:12.118 [2024-07-12 01:56:38.236032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.236039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.236380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.236387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.236759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.236766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.237088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.237096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.237426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.237434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.237789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.237796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.238152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.238159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.238618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.238625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.238803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.238810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.239131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.239138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 
00:38:12.118 [2024-07-12 01:56:38.239435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.239442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.239780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.239787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.240149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.240155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.240490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.240497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.240703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.240709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.240950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.240956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.241118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.241125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.241422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.241429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.241782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.241790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.242121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.242128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 
00:38:12.118 [2024-07-12 01:56:38.242466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.242475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.242705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.242712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.242856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.242863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.243150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.243158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.243481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.243488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.243855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.243862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.244209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.244216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.244577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.244584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.245010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.245017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 00:38:12.118 [2024-07-12 01:56:38.245364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.118 [2024-07-12 01:56:38.245372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.118 qpair failed and we were unable to recover it. 
00:38:12.118 [2024-07-12 01:56:38.245776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.118 [2024-07-12 01:56:38.245784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.118 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for roughly 200 further connection attempts between 01:56:38.245 and 01:56:38.309 ...]
00:38:12.124 [2024-07-12 01:56:38.309552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.124 [2024-07-12 01:56:38.309559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.124 qpair failed and we were unable to recover it.
00:38:12.124 [2024-07-12 01:56:38.309767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.309774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.309975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.309983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.310325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.310332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.310702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.310710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.311030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.311037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.311239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.311246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.311647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.311655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.312058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.312065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.312471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.312478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.312802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.312810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 
00:38:12.124 [2024-07-12 01:56:38.312939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.312946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.313250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.313258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.313641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.313648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.313991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.313998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.314346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.314353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.314684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.314691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.315033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.315039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.315368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.315376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.315584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.315591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.315930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.315936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 
00:38:12.124 [2024-07-12 01:56:38.316269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.316276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.316601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.316609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.316803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.316810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.317060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.317067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.317368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.317375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.317737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.317744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.318098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.318104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.318451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.318459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.318795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.318801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.319014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.319021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 
00:38:12.124 [2024-07-12 01:56:38.319313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.319321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.319663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.319670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.319880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.319887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.320236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.320243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.124 [2024-07-12 01:56:38.320551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.124 [2024-07-12 01:56:38.320558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.124 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.320701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.320708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.320956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.320962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.321301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.321309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.321617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.321624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.321849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.321856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 
00:38:12.125 [2024-07-12 01:56:38.322224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.322235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.322609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.322615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.322954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.322960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.323305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.323312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.323632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.323639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.323974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.323981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.324269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.324276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.324572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.324579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.324794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.324801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.325004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.325011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 
00:38:12.125 [2024-07-12 01:56:38.325272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.325279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.325605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.325611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.325801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.325808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.326110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.326117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.326474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.326482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.326834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.326842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.327082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.327088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.327266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.327274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.327487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.327494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.327802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.327809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 
00:38:12.125 [2024-07-12 01:56:38.328145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.328153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.328510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.328516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.328760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.328767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.329068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.329075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.329461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.329467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 00:38:12.125 [2024-07-12 01:56:38.329621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.125 [2024-07-12 01:56:38.329627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.125 qpair failed and we were unable to recover it. 
00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Read completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.125 Write completed with error (sct=0, sc=8) 00:38:12.125 starting I/O failed 00:38:12.126 Read completed with error (sct=0, sc=8) 00:38:12.126 starting I/O failed 00:38:12.126 Read completed with error (sct=0, sc=8) 00:38:12.126 starting I/O failed 00:38:12.126 [2024-07-12 01:56:38.330356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:12.126 [2024-07-12 01:56:38.330823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.330865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df4000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 
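Here the failure escalates: a burst of outstanding reads and writes completes with (sct=0, sc=8), spdk_nvme_qpair_process_completions reports CQ transport error -6 (No such device or address, i.e. ENXIO) on qpair id 1, and the following connect() attempts fail against a different tqpair buffer (0x7f1df4000b90). The (sct, sc) pair is unpacked from the 16-bit status field of the NVMe completion entry; a small self-contained sketch of that decoding, assuming the status layout of the NVMe base specification (phase tag in bit 0, status code in bits 8:1, status code type in bits 11:9) and a made-up raw value chosen to match the log:

/*
 * Illustrative sketch only (not SPDK source). Decodes a hypothetical
 * completion status halfword into the (sct, sc) pair that the test
 * prints as "completed with error (sct=0, sc=8)".
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = 0x0011;             /* hypothetical: phase=1, sc=8, sct=0 */

    unsigned sc  = (status >> 1) & 0xff;  /* Status Code                      */
    unsigned sct = (status >> 9) & 0x07;  /* Status Code Type (0 = generic)   */

    printf("sct=%u, sc=%u\n", sct, sc);   /* prints: sct=0, sc=8 */
    return 0;
}

sct=0 is the generic command status type; the precise meaning of sc=8 is best read from the generic status table of the NVMe spec revision the target implements.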
00:38:12.126 [2024-07-12 01:56:38.331099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.331126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df4000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.331634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.331722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df4000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.332073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.332082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.332438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.332466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.332698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.332707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.332944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.332952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.333266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.333273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.333612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.333619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.333828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.333834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.334180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.334187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 
00:38:12.126 [2024-07-12 01:56:38.334515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.334523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.334838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.334845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.335181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.335188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.335514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.335522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.335845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.335852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.336224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.336243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.336631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.336647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.336980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.336987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.337258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.337265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.337614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.337621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 
00:38:12.126 [2024-07-12 01:56:38.337949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.337956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.338155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.338161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.338463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.338471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.338828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.338835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.339140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.339147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.339475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.339482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.339791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.339799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.340153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.340161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.340499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.340506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.340824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.340831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 
00:38:12.126 [2024-07-12 01:56:38.341191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.341198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.341422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.341429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.341668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.126 [2024-07-12 01:56:38.341676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.126 qpair failed and we were unable to recover it. 00:38:12.126 [2024-07-12 01:56:38.342002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.342010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.342185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.342193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.342502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.342510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.342838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.342847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.343177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.343185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.343525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.343533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.343850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.343858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 
00:38:12.127 [2024-07-12 01:56:38.344207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.344215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.344458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.344466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.344809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.344817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.345176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.345183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.345530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.345537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.345725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.345733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.346029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.346037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.346368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.346375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.346577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.346584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 00:38:12.127 [2024-07-12 01:56:38.346957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.127 [2024-07-12 01:56:38.346964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.127 qpair failed and we were unable to recover it. 
00:38:12.127 [2024-07-12 01:56:38.347278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.127 [2024-07-12 01:56:38.347286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.127 qpair failed and we were unable to recover it.
00:38:12.132 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 2024-07-12 01:56:38.347 through 01:56:38.414 ...]
00:38:12.132 [2024-07-12 01:56:38.414759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.132 [2024-07-12 01:56:38.414766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.132 qpair failed and we were unable to recover it.
00:38:12.132 [2024-07-12 01:56:38.415087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.415094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.415421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.415428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.415750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.415758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.416111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.416118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.416464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.416471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.416714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.416722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.417107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.417114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.417438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.417446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.417783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.417790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.418128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.418135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 
00:38:12.132 [2024-07-12 01:56:38.418472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.418480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.132 [2024-07-12 01:56:38.418831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.132 [2024-07-12 01:56:38.418838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.132 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.419055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.419061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.419433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.419440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.419663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.419670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.420000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.420006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.420320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.420327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.420656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.420662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.420976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.420983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.421173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.421180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 
00:38:12.133 [2024-07-12 01:56:38.421425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.421432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.421754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.421760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.422010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.422016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.422344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.422351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.422676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.422684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.423088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.423096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.423270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.423277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.423604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.423611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.423928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.423934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.424262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.424269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 
00:38:12.133 [2024-07-12 01:56:38.424602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.424609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.424942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.424949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.425271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.425278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.425487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.425494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.425795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.425802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.426163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.426170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.426508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.426516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.426842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.426850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.427173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.427180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.427409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.427416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 
00:38:12.133 [2024-07-12 01:56:38.427738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.427745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.428096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.428104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.428423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.428430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.428623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.428631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.428958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.428965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.429153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.429160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.429482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.429488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.429883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.429890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.430224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.430236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.430612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.430619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 
00:38:12.133 [2024-07-12 01:56:38.430948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.430954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.431161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.431169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.133 [2024-07-12 01:56:38.431481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.133 [2024-07-12 01:56:38.431488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.133 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.431801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.431808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.432138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.432145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.432471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.432485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.432832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.432839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.433193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.433199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.433465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.433472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.433798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.433805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 
00:38:12.134 [2024-07-12 01:56:38.434146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.434154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.434485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.434491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.434719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.434726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.435048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.435055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.435377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.435385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.435759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.435765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.436114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.436120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.436439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.436446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.436764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.436771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.437014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.437022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 
00:38:12.134 [2024-07-12 01:56:38.437325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.437333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.437656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.437664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.437983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.437989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.438177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.438184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.438524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.438531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.438851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.438857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.439215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.439222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.439581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.439588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.439934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.439940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.440292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.440299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 
00:38:12.134 [2024-07-12 01:56:38.440617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.440625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.440819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.440826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.441122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.441130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.441433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.441441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.441766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.441774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.442100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.442106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.442430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.134 [2024-07-12 01:56:38.442438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.134 qpair failed and we were unable to recover it. 00:38:12.134 [2024-07-12 01:56:38.442792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.442798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.443152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.443158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.443482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.443489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 
00:38:12.135 [2024-07-12 01:56:38.443824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.443831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.444141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.444149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.444352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.444360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.444534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.444541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.444831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.444838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.445165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.445172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.445504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.445511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.445817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.445825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.446155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.446162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.446492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.446500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 
00:38:12.135 [2024-07-12 01:56:38.446837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.446844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.447163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.447171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.447504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.447511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.447860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.447868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.448198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.448206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.448513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.448521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.448930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.448937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.449245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.449253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.449531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.449537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 00:38:12.135 [2024-07-12 01:56:38.449832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.135 [2024-07-12 01:56:38.449838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.135 qpair failed and we were unable to recover it. 
00:38:12.410 [2024-07-12 01:56:38.450214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.450223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.450549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.450558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.450894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.450901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.451217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.451224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.451557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.451564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.451915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.451921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.452275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.452281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.452588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.452595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.452922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.452929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.453289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.453296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 
00:38:12.410 [2024-07-12 01:56:38.453612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.453619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.453919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.453926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.454267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.454273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.410 qpair failed and we were unable to recover it. 00:38:12.410 [2024-07-12 01:56:38.454617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.410 [2024-07-12 01:56:38.454626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.454938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.454945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.455273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.455280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.455528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.455535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.455869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.455876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.456190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.456197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.456530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.456537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 
00:38:12.411 [2024-07-12 01:56:38.456833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.456839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.457183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.457190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.457540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.457547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.457845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.457852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.458194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.458201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.458424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.458431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.458771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.458777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.459084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.459092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.459496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.459504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.459885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.459891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 
00:38:12.411 [2024-07-12 01:56:38.460198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.460205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.460540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.460547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.460918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.460924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.461250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.461256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.461598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.461605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.461809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.461816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.462141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.462148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.462477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.462485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.462830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.462837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.463135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.463141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 
00:38:12.411 [2024-07-12 01:56:38.463480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.463487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.463841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.463849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.464181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.464188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.464613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.464620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.464766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.464772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.465116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.465123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.465453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.465461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.465794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.465801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.466146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.466154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.466559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.466567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 
00:38:12.411 [2024-07-12 01:56:38.466888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.466895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.467220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.467227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.467578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.467586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.467957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.411 [2024-07-12 01:56:38.467967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.411 qpair failed and we were unable to recover it. 00:38:12.411 [2024-07-12 01:56:38.468323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.468330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.469236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.469255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.469592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.469600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.469800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.469808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.470110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.470117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.470475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.470482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 
00:38:12.412 [2024-07-12 01:56:38.470829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.470843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.471175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.471181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.471499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.471507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.471831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.471838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.472165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.472174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.472220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.472227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.472446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.472453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.472820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.472827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.473152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.473159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.473519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.473526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 
00:38:12.412 [2024-07-12 01:56:38.473913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.473920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.474293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.474300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.474622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.474629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.474993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.475000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.475203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.475210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.475532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.475540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.475649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.475656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.475945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.475953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.476301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.476308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.476627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.476633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 
00:38:12.412 [2024-07-12 01:56:38.476963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.476970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.477289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.477297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.477637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.477645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.477875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.477882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.478228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.478239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.478467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.478474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.478798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.478805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.479138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.479145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.479478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.479485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.479828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.479835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 
00:38:12.412 [2024-07-12 01:56:38.480041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.480048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.480287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.480295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.480505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.480513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.480751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.480759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.412 qpair failed and we were unable to recover it. 00:38:12.412 [2024-07-12 01:56:38.481117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.412 [2024-07-12 01:56:38.481125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.481453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.481461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.481671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.481678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.481947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.481955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.482238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.482245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.482466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.482473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 
00:38:12.413 [2024-07-12 01:56:38.482797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.482805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.483034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.483042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.483366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.483373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.483705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.483712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.484039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.484047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.484376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.484384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.484572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.484579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.484850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.484857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.485062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.485069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.485401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.485408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 
00:38:12.413 [2024-07-12 01:56:38.485746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.485754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.486091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.486098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.486435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.486443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.486780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.486787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.486957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.486965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.487271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.487280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.487583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.487590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.487791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.487799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.488158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.488164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.488551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.488557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 
00:38:12.413 [2024-07-12 01:56:38.488895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.488901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.489228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.489237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.489561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.489568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.489952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.489958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.490310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.490317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.490630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.490637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.490962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.490969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.491303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.491311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.491543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.491549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.491910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.491917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 
00:38:12.413 [2024-07-12 01:56:38.492239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.492247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.492575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.492582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.492944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.492950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.493297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.493307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.493637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.413 [2024-07-12 01:56:38.493644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.413 qpair failed and we were unable to recover it. 00:38:12.413 [2024-07-12 01:56:38.493973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.493980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.494303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.494311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.494477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.494484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.494777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.494784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.494997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.495004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 
00:38:12.414 [2024-07-12 01:56:38.495329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.495336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.495672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.495678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.496007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.496013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.496423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.496429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.496761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.496769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.497098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.497104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.497491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.497497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.497847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.497854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.498204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.498210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.498453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.498460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 
00:38:12.414 [2024-07-12 01:56:38.498789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.498795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.499107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.499115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.499482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.499489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.499843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.499849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.500179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.500185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.500394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.500401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.500759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.500766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.501097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.501104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.501437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.501444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.501763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.501770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 
00:38:12.414 [2024-07-12 01:56:38.502124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.502131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.502488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.502495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.502856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.502862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.503209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.503217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.503541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.503548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.503782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.503789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.503980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.503987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.504321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.504328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.414 [2024-07-12 01:56:38.504554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.414 [2024-07-12 01:56:38.504560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.414 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.504909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.504915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 
00:38:12.415 [2024-07-12 01:56:38.505253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.505259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.505576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.505582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.505899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.505906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.506247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.506255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.506479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.506487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.506737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.506743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.507085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.507091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.507372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.507379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.507712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.507719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.508043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.508051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 
00:38:12.415 [2024-07-12 01:56:38.508388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.508395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.508725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.508732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.508966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.508972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.509192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.509198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.509573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.509580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.509902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.509911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.510243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.510250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.510486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.510492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.510714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.510720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 00:38:12.415 [2024-07-12 01:56:38.511039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.415 [2024-07-12 01:56:38.511046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.415 qpair failed and we were unable to recover it. 
00:38:12.415 [2024-07-12 01:56:38.511347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.415 [2024-07-12 01:56:38.511353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.415 qpair failed and we were unable to recover it.
00:38:12.415 [2024-07-12 01:56:38.511697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.415 [2024-07-12 01:56:38.511704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.415 qpair failed and we were unable to recover it.
00:38:12.415 [2024-07-12 01:56:38.512060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.415 [2024-07-12 01:56:38.512068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.415 qpair failed and we were unable to recover it.
00:38:12.421 [2024-07-12 01:56:38.574940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.421 [2024-07-12 01:56:38.574946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.421 qpair failed and we were unable to recover it.
00:38:12.421 [2024-07-12 01:56:38.575309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.421 [2024-07-12 01:56:38.575316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.421 qpair failed and we were unable to recover it.
00:38:12.421 [2024-07-12 01:56:38.575656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.575664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.575975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.575983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.576337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.576344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.576506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.576513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.576884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.576890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.577222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.577231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.577535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.577542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.577958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.577966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.578280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.578287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.578644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.578650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 
00:38:12.421 [2024-07-12 01:56:38.579059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.421 [2024-07-12 01:56:38.579067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.421 qpair failed and we were unable to recover it. 00:38:12.421 [2024-07-12 01:56:38.579404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.579411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.579755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.579762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.580008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.580015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.580227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.580238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.580560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.580566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.580893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.580899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.581209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.581216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.581537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.581544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.581842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.581849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 
00:38:12.422 [2024-07-12 01:56:38.582082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.582090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.582422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.582429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.582809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.582815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.583007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.583014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.583262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.583269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.583630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.583637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.583949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.583956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.584275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.584282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.584592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.584599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.584927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.584933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 
00:38:12.422 [2024-07-12 01:56:38.585244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.585251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.585636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.585642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.585958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.585965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.586294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.586300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.586621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.586628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.586987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.586994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.587321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.587328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.587643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.587650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.588004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.588012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.588189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.588196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 
00:38:12.422 [2024-07-12 01:56:38.588506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.588515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.588849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.588856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.589167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.589174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.589493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.589501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.589688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.589695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.590076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.590082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.590403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.590410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.590725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.590731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.591052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.591060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.591298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.591305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 
00:38:12.422 [2024-07-12 01:56:38.591461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.591468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.422 [2024-07-12 01:56:38.591799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.422 [2024-07-12 01:56:38.591806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.422 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.592133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.592139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.592476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.592483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.592866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.592873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.593050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.593058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.593258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.593265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.593596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.593603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.593815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.593823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.594157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.594164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 
00:38:12.423 [2024-07-12 01:56:38.594489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.594496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.594851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.594858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.595167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.595174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.595498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.595504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.595817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.595824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.596138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.596145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.596487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.596495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.596829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.596836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.597185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.597193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.597507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.597514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 
00:38:12.423 [2024-07-12 01:56:38.597878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.597885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.598206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.598213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.598588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.598595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.598981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.598987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.599302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.599310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.599617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.599624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.599942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.599951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.600280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.600288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.601014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.601030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.601334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.601343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 
00:38:12.423 [2024-07-12 01:56:38.601650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.601659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.601965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.601971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.602285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.602293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.602718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.602724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.603038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.603045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.603355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.603363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.603561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.603569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.603905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.603913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.604251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.604257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.604634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.604641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 
00:38:12.423 [2024-07-12 01:56:38.604977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.604984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.423 [2024-07-12 01:56:38.605338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.423 [2024-07-12 01:56:38.605346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.423 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.605756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.605763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.606067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.606074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.606430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.606437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.606754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.606762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.607096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.607103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.607426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.607434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.607742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.607749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.608060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.608067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 
00:38:12.424 [2024-07-12 01:56:38.608361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.608368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.608679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.608686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.608889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.608897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.609321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.609330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.609603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.609610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.609975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.609982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.610350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.610358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.610705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.610713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.611039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.611046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.611359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.611367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 
00:38:12.424 [2024-07-12 01:56:38.611701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.611708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.612028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.612034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.612313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.612320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.612625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.612632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.612827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.612834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.613223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.613235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.613569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.613576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.613893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.613900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.614213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.614220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.614521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.614529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 
00:38:12.424 [2024-07-12 01:56:38.614852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.614861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.615167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.615174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.615964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.424 [2024-07-12 01:56:38.615980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.424 qpair failed and we were unable to recover it. 00:38:12.424 [2024-07-12 01:56:38.616152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.616162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.616461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.616471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.616672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.616680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.617030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.617038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.617369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.617376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.617762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.617770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.618012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.618020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 
00:38:12.425 [2024-07-12 01:56:38.618371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.618378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.618730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.618737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.619064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.619071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.619404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.619412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.619780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.619787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.620062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.620069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.620395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.620402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.620751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.620758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.621066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.621073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 00:38:12.425 [2024-07-12 01:56:38.621367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.425 [2024-07-12 01:56:38.621374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.425 qpair failed and we were unable to recover it. 
00:38:12.425 [log condensed] the error sequence "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." repeats continuously from 01:56:38.621 through 01:56:38.686; the final occurrence is shown below.
00:38:12.430 [2024-07-12 01:56:38.686205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.686211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it.
00:38:12.430 [2024-07-12 01:56:38.686527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.686534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.686894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.686901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.687210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.687217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.687505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.687513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.687886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.687893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.688224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.688236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.688556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.688563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.688981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.688988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.689337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.689344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.689675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.689682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 
00:38:12.430 [2024-07-12 01:56:38.690029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.690036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.690359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.690366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.430 [2024-07-12 01:56:38.690690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.430 [2024-07-12 01:56:38.690696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.430 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.691015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.691022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.691363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.691370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.691711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.691718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.692046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.692053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.692385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.692392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.692722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.692729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.693049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.693055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 
00:38:12.431 [2024-07-12 01:56:38.693372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.693380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.693703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.693710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.694046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.694052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.694377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.694383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.694651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.694658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.694979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.694985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.695294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.695301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.695628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.695634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.696030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.696036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.696339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.696346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 
00:38:12.431 [2024-07-12 01:56:38.696665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.696671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.696996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.697004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.697331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.697339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.697673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.697680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.698001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.698008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.698423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.698429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.698738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.698744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.699101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.699107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.699436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.699443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.699846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.699853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 
00:38:12.431 [2024-07-12 01:56:38.700160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.700167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.700499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.700505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.700862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.700868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.701174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.701181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.701586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.701594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.701888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.701895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.702232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.702239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.702483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.702490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.702820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.702826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.703142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.703148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 
00:38:12.431 [2024-07-12 01:56:38.703478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.703485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.703714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.431 [2024-07-12 01:56:38.703721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.431 qpair failed and we were unable to recover it. 00:38:12.431 [2024-07-12 01:56:38.704049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.704055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.704377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.704383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.704737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.704744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.705053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.705059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.705099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.705106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.705427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.705435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.705741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.705748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.706105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.706113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 
00:38:12.432 [2024-07-12 01:56:38.706342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.706350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.706679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.706686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.707019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.707026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.707352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.707358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.707724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.707731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.708039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.708045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.708361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.708367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.708695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.708701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.708884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.708892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.709085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.709092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 
00:38:12.432 [2024-07-12 01:56:38.709383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.709390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.709708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.709714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.710030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.710036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.710398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.710404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.710624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.710631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.710790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.710796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.711125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.711132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.711466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.711472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.711824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.711831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.712187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.712194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 
00:38:12.432 [2024-07-12 01:56:38.712494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.712501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.712866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.712872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.713226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.713235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.713573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.713581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.713910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.713916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.714273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.714280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.714465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.714472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.714842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.714850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.715088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.715096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.715432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.715438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 
00:38:12.432 [2024-07-12 01:56:38.715765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.715771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.432 qpair failed and we were unable to recover it. 00:38:12.432 [2024-07-12 01:56:38.716000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.432 [2024-07-12 01:56:38.716006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.716344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.716351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.716667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.716673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.716946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.716953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.717172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.717178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.717509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.717515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.717749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.717755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.718086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.718093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.718485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.718492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 
00:38:12.433 [2024-07-12 01:56:38.718817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.718823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.719137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.719143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.719475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.719482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.719832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.719839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.720158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.720165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.720512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.720519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.720876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.720883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.721191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.721198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.721425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.721432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.721640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.721646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 
00:38:12.433 [2024-07-12 01:56:38.721971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.721978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.722327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.722334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.722700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.722707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.723025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.723033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.723295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.723302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.723508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.723514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.723878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.723884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.724094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.724100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.724420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.724427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.724569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.724576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 
00:38:12.433 [2024-07-12 01:56:38.724877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.724883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.725196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.725202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.725396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.725403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.725750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.725757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.726067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.726074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.726426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.726434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.433 qpair failed and we were unable to recover it. 00:38:12.433 [2024-07-12 01:56:38.726636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.433 [2024-07-12 01:56:38.726642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.726950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.726957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.727280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.727287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.727630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.727637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 
00:38:12.434 [2024-07-12 01:56:38.727946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.727953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.728307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.728314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.728640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.728648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.728914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.728921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.729262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.729269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.729617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.729623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.729938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.729945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.730058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.730064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.730355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.730361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.730676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.730682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 
00:38:12.434 [2024-07-12 01:56:38.730770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.730777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.731069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.731076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.731437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.731443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.731736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.731743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.732096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.732103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.732458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.732465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.732793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.732800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.733160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.733166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.733541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.733547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.733750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.733757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 
00:38:12.434 [2024-07-12 01:56:38.734061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.734067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.734409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.734415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.734829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.734835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.735148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.735155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.735561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.735567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.735880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.735887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.736205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.736212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.736548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.736555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.736913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.736921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.737238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.737245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 
00:38:12.434 [2024-07-12 01:56:38.737429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.737436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.737817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.737825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.738141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.738148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.738473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.738480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.738845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.738852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.739165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.434 [2024-07-12 01:56:38.739172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.434 qpair failed and we were unable to recover it. 00:38:12.434 [2024-07-12 01:56:38.739502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.739509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.739688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.739696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.739949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.739955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.740196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.740202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 
00:38:12.435 [2024-07-12 01:56:38.740626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.740633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.740944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.740951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.741302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.741309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.741618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.741624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.741956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.741962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.742358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.742365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.742691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.742699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.743029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.743036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.743376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.743383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.743713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.743720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 
00:38:12.435 [2024-07-12 01:56:38.744043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.744049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.744364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.744371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.744555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.744563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.744853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.744859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.745097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.745105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.745267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.745274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.745601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.745608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.745971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.745977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.746330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.746337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.746585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.746592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 
00:38:12.435 [2024-07-12 01:56:38.746780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.746787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.747146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.747153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.747483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.747490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.747847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.747854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.748209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.748217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.748538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.748546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.748866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.748872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.749228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.749238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.749553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.749559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.749920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.749926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 
00:38:12.435 [2024-07-12 01:56:38.750158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.750164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.750505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.750512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.750705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.750712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.750914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.750921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.751251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.751257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.751592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.751598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.435 qpair failed and we were unable to recover it. 00:38:12.435 [2024-07-12 01:56:38.751826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.435 [2024-07-12 01:56:38.751832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.436 qpair failed and we were unable to recover it. 00:38:12.436 [2024-07-12 01:56:38.752168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.436 [2024-07-12 01:56:38.752177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.436 qpair failed and we were unable to recover it. 00:38:12.436 [2024-07-12 01:56:38.752545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.436 [2024-07-12 01:56:38.752552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.436 qpair failed and we were unable to recover it. 00:38:12.436 [2024-07-12 01:56:38.752888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.436 [2024-07-12 01:56:38.752895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.436 qpair failed and we were unable to recover it. 
00:38:12.711 [2024-07-12 01:56:38.753221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.753233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.753583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.753590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.753953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.753959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.754271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.754278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.754516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.754522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.754872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.754878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.755161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.755168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.755545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.755552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.755886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.755894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.756238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.756245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 
00:38:12.711 [2024-07-12 01:56:38.756600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.756606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.756938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.756946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.757162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.757169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.757509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.757516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.757847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.757854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.758176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.758183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.758498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.758506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.758837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.758844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.759198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.759205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.759403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.759410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 
00:38:12.711 [2024-07-12 01:56:38.759742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.759749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.760058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.760064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.760422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.760430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.760808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.760814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.761122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.761129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.761357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.761364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.761699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.761705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.762018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.762024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.762339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.762346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 00:38:12.711 [2024-07-12 01:56:38.762580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.711 [2024-07-12 01:56:38.762586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.711 qpair failed and we were unable to recover it. 
00:38:12.712 [2024-07-12 01:56:38.762927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.762934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.763238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.763245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.763553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.763559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.763874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.763880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.764264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.764271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.764479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.764486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.764676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.764684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.765000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.765008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.765343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.765350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.765667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.765674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 
00:38:12.712 [2024-07-12 01:56:38.766038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.766045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.766358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.766365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.766739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.766746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.767069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.767076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.767413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.767419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.767622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.767629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.767977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.767983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.768340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.768347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.768590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.768598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.768922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.768929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 
00:38:12.712 [2024-07-12 01:56:38.769291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.769298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.769485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.769493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.769802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.769809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.770119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.770125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.770516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.770522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.770831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.770837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.771192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.771198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.771400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.771407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.771700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.771707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.772038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.772045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 
00:38:12.712 [2024-07-12 01:56:38.772405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.772412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.772718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.772724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.773080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.773086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.773319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.773326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.773657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.773663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.773983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.773989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.774347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.774354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.774676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.774682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.774884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.774891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 00:38:12.712 [2024-07-12 01:56:38.775236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.712 [2024-07-12 01:56:38.775243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.712 qpair failed and we were unable to recover it. 
00:38:12.713 [2024-07-12 01:56:38.775601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.775607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.775961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.775968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.776275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.776282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.776512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.776518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.776864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.776871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.777179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.777185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.777415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.777423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.777761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.777768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.778092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.778098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.778442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.778448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 
00:38:12.713 [2024-07-12 01:56:38.778786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.778793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.779113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.779119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.779428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.779435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.779763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.779769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.780089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.780096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.780423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.780431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.780784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.780792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.781126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.781133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.781457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.781464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.781791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.781797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 
00:38:12.713 [2024-07-12 01:56:38.782110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.782116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.782470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.782477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.782831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.782838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.783069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.783076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.783314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.783321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.783536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.783543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.783936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.783942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.784264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.784271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.784697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.784703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.785021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.785028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 
00:38:12.713 [2024-07-12 01:56:38.785260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.785268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.785606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.785612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.785963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.785970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.786369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.786376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.786728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.786735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.787041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.787047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.787370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.787377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.787753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.787759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.788154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.788161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 00:38:12.713 [2024-07-12 01:56:38.788466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.713 [2024-07-12 01:56:38.788474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.713 qpair failed and we were unable to recover it. 
00:38:12.713 [2024-07-12 01:56:38.788792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.788799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.789083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.789091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.789506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.789512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.789839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.789846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.790162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.790168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.790530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.790537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.790845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.790852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.791080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.791088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.791413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.791419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.791774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.791781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 
00:38:12.714 [2024-07-12 01:56:38.792101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.792108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.792424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.792431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.792765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.792771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.793106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.793113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.793465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.793472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.793797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.793804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.794132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.794138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.794363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.794371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.794746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.794752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.795064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.795071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 
00:38:12.714 [2024-07-12 01:56:38.795453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.795460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.795766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.795772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.796129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.796135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.796464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.796471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.796653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.796660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.796965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.796971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.797202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.797208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.797558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.797565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.797806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.797813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.797974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.797981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 
00:38:12.714 [2024-07-12 01:56:38.798290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.798297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.798596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.798603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.798959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.798965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.799337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.799344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.799680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.799687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.799922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.799929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.800284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.800291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.800472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.800479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.800806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.800812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 00:38:12.714 [2024-07-12 01:56:38.801119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.801125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.714 qpair failed and we were unable to recover it. 
00:38:12.714 [2024-07-12 01:56:38.801353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.714 [2024-07-12 01:56:38.801359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.801667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.801673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.802007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.802013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.802372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.802379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.802704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.802710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.803026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.803033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.803363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.803371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.803716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.803725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.804061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.804067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.804390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.804397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 
00:38:12.715 [2024-07-12 01:56:38.804617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.804624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.804993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.804999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.805309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.805315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.805627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.805633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.805963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.805970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.806303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.806310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.806612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.806618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.806962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.806970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.807278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.807285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.807594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.807601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 
00:38:12.715 [2024-07-12 01:56:38.807954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.807960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.808268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.808275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.808542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.808549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.808716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.808722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.809027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.809034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.809327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.809334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.809669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.809675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.809988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.809994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.810335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.810342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.810649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.810656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 
00:38:12.715 [2024-07-12 01:56:38.810972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.810979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.811291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.811298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.811571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.811577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.715 [2024-07-12 01:56:38.811893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.715 [2024-07-12 01:56:38.811900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.715 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.812225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.812234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.812538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.812544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.812872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.812879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.813283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.813291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.813570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.813576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.813898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.813904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 
00:38:12.716 [2024-07-12 01:56:38.814258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.814265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.814573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.814579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.814895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.814901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.815225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.815235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.815604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.815611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.815931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.815937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.816251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.816258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.816588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.816596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.816909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.816915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.817266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.817273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 
00:38:12.716 [2024-07-12 01:56:38.817508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.817514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.817845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.817851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.818205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.818211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.818527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.818534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.818802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.818808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.819163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.819171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.819216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.819224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.819538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.819544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.819851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.819858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.820053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.820060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 
00:38:12.716 [2024-07-12 01:56:38.820408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.820415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.820728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.820735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.821053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.821060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.821374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.821381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.821700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.821707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.822018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.822025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.822338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.822345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.822673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.822679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.822918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.822924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.823242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.823249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 
00:38:12.716 [2024-07-12 01:56:38.823577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.823583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.823913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.823921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.824197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.824204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.824440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.716 [2024-07-12 01:56:38.824448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.716 qpair failed and we were unable to recover it. 00:38:12.716 [2024-07-12 01:56:38.824781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.824788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.825106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.825113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.825466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.825473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.825795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.825801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.826004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.826011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.826302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.826310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 
00:38:12.717 [2024-07-12 01:56:38.826638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.826645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.827000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.827007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.827316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.827323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.827651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.827658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.827981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.827989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.828185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.828192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.828546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.828552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.828797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.828805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.829139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.829146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.829471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.829478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 
00:38:12.717 [2024-07-12 01:56:38.829670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.829678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.829969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.829976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.830149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.830157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.830501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.830508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.830822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.830829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.831155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.831162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.831540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.831546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.831899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.831905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.832262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.832269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.832588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.832595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 
00:38:12.717 [2024-07-12 01:56:38.832793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.832800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.833123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.833131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.833430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.833437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.833745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.833751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.834067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.834074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.834305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.834312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.834646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.834653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.834863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.834870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.835201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.835208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 00:38:12.717 [2024-07-12 01:56:38.835528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.717 [2024-07-12 01:56:38.835535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.717 qpair failed and we were unable to recover it. 
00:38:12.717 [2024-07-12 01:56:38.835848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.717 [2024-07-12 01:56:38.835854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.717 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 01:56:38.835848 through 01:56:38.902377 ...]
00:38:12.723 [2024-07-12 01:56:38.902370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:12.723 [2024-07-12 01:56:38.902377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:12.723 qpair failed and we were unable to recover it.
00:38:12.723 [2024-07-12 01:56:38.902736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.902742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.903051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.903057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.903442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.903449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.903763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.903769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.904081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.904087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.904424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.904430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.904756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.904763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.905097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.905104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.905427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.905434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.905740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.905746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 
00:38:12.723 [2024-07-12 01:56:38.906091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.906097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.906533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.906540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.906851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.906857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.907179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.907186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.907532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.907538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.907872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.907879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.908279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.908286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.908507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.908514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.908850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.908856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.909222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.909228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 
00:38:12.723 [2024-07-12 01:56:38.909584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.909591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.909948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.909954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.910271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.910277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.910636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.910643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.910950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.910956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.911138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.911146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.911479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.911486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.911795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.911802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.912123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.912129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.912486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.912492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 
00:38:12.723 [2024-07-12 01:56:38.912852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.912858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.913208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.913214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.723 [2024-07-12 01:56:38.913524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.723 [2024-07-12 01:56:38.913531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.723 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.913847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.913854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.913960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.913968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.914254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.914262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.914469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.914475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.914804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.914810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.915005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.915012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.915320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.915327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 
00:38:12.724 [2024-07-12 01:56:38.915628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.915634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.915862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.915869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.916158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.916164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.916481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.916487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.916677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.916683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.916992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.917222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.917232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.917565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.917572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.917888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.917894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.918120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.918126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 
00:38:12.724 [2024-07-12 01:56:38.918418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.918426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.918733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.918741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.918969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.918975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.919330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.919336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.919508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.919515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.919918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.919924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.920244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.920251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.920591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.920597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.920912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.920918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.921263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.921269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 
00:38:12.724 [2024-07-12 01:56:38.921606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.921614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.921966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.921973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.922288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.922296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.922492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.922499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.922821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.922828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.923190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.923196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.923575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.923581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.724 [2024-07-12 01:56:38.923937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.724 [2024-07-12 01:56:38.923943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.724 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.924327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.924334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.924671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.924677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 
00:38:12.725 [2024-07-12 01:56:38.925064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.925070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.925394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.925402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.925747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.925754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.926109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.926116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.926523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.926532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.926848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.926855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.927249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.927255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.927561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.927568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.927888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.927895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.928252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.928258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 
00:38:12.725 [2024-07-12 01:56:38.928588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.928595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.928925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.928932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.929262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.929269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.929626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.929632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.929839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.929846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.930147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.930154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.930472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.930478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.930865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.930872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.931185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.931191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.931515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.931522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 
00:38:12.725 [2024-07-12 01:56:38.931895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.931901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.932277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.932285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.932626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.932633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.932955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.932961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.933287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.933294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.933658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.933664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.933979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.933985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.934351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.934358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.934670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.934677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.935025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.935031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 
00:38:12.725 [2024-07-12 01:56:38.935323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.935331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.935698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.935705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.936041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.936049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.936423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.936430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.936733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.936740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.936958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.936964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.937298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.937305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.937558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.937564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.725 qpair failed and we were unable to recover it. 00:38:12.725 [2024-07-12 01:56:38.937874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.725 [2024-07-12 01:56:38.937880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.938198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.938205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 
00:38:12.726 [2024-07-12 01:56:38.938511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.938518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.938876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.938883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.939085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.939092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.939428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.939435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.939789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.939797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.940153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.940160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.940430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.940437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.940763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.940770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.941000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.941007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.941346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.941353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 
00:38:12.726 [2024-07-12 01:56:38.941708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.941715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.942025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.942032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.942381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.942388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.942603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.942609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.942941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.942947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.943280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.943286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.943567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.943574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.943940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.943947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.944275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.944283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.944608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.944614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 
00:38:12.726 [2024-07-12 01:56:38.944924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.944930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.945283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.945520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.945526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.945841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.945848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.946182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.946188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.946504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.946511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.946870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.946877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.947188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.947195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.947515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.947522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.947877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.947885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 
00:38:12.726 [2024-07-12 01:56:38.948215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.948221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.948554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.948561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.948866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.948872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.949232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.949239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.949594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.949600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.949907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.949914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.950266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.950273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.950627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.950634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.950819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.726 [2024-07-12 01:56:38.950826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.726 qpair failed and we were unable to recover it. 00:38:12.726 [2024-07-12 01:56:38.951127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.951134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 
00:38:12.727 [2024-07-12 01:56:38.951465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.951472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.951780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.951786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.951978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.951985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.952323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.952330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.952642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.952650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.953002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.953008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.953320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.953327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.953708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.953714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.953980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.953986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.954323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.954330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 
00:38:12.727 [2024-07-12 01:56:38.954664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.954671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.955072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.955079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.955410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.955417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.955706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.955712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.956066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.956074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.956384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.956391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.956598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.956606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.956975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.956982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.957297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.957304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.957599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.957606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 
00:38:12.727 [2024-07-12 01:56:38.957952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.957959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.958285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.958293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.958652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.958659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.959014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.959021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.959219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.959227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.959597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.959605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.959954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.959960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.960276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.960282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.960622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.960628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.960985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.960991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 
00:38:12.727 [2024-07-12 01:56:38.961345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.961352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.961690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.961697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.962027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.962034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.962222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.962240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.962430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.962438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.962739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.962747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.962978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.962984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.963309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.963316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.963632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.963638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 00:38:12.727 [2024-07-12 01:56:38.963951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.727 [2024-07-12 01:56:38.963958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.727 qpair failed and we were unable to recover it. 
00:38:12.728 [2024-07-12 01:56:38.964321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.964328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.964649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.964656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.964983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.964990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.965357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.965364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.965703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.965712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.966034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.966041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.966223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.966234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.966574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.966580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.966942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.966948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.967193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.967199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 
00:38:12.728 [2024-07-12 01:56:38.967540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.967547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.967902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.967908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.968085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.968092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.968437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.968444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.968750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.968757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.969063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.969069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.969372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.969379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.969687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.969694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.970009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.970016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.970327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.970334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 
00:38:12.728 [2024-07-12 01:56:38.970642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.970648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.970885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.970893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.971083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.971091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.971401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.971408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.971736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.971742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.972090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.972097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.972420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.972427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.972748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.972755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.973068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.973074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.973388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.973395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 
00:38:12.728 [2024-07-12 01:56:38.973711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.973717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.974116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.974124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.974475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.974482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.974805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.974811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.975131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.728 [2024-07-12 01:56:38.975137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.728 qpair failed and we were unable to recover it. 00:38:12.728 [2024-07-12 01:56:38.975458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.975464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.975774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.975781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.976098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.976104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.976419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.976426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.976779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.976786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 
00:38:12.729 [2024-07-12 01:56:38.977144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.977151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.977473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.977481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.977809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.977815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.978131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.978137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.978466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.978474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.978823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.978830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.979148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.979154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.979490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.979497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.979681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.979688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.979979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.979985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 
00:38:12.729 [2024-07-12 01:56:38.980292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.980299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.980624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.980631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.980902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.980909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.981233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.981239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.981575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.981581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.981897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.981903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.982194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.982200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.982544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.982551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.982748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.982756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.983084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.983091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 
00:38:12.729 [2024-07-12 01:56:38.983427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.983434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.983791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.983797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.984117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.984124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.984460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.984466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.984737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.984743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.985091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.985097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.985505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.985513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.985835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.985842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.986147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.986153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.986472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.986478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 
00:38:12.729 [2024-07-12 01:56:38.986795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.986801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.987125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.987132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.987369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.987376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.987600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.987607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.988027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.988033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.729 [2024-07-12 01:56:38.988338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.729 [2024-07-12 01:56:38.988344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.729 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.988750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.988756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.989115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.989122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.989477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.989483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.989707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.989714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 
00:38:12.730 [2024-07-12 01:56:38.990006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.990012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.990329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.990335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.990734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.990740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.990914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.990922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.991179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.991187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.991520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.991527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.991892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.991899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.992207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.992213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.992546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.992553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.992922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.992929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 
00:38:12.730 [2024-07-12 01:56:38.993238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.993244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.993604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.993610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.993927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.993934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.994284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.994291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.994632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.994638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.994816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.994823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.995116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.995122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.995333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.995340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.995664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.995671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.995979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.995985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 
00:38:12.730 [2024-07-12 01:56:38.996221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.996228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.996563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.996570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.996894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.996900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.997226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.997236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.997585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.997591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.997943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.997950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.998173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.998179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.998510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.998517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.998791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.998798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 00:38:12.730 [2024-07-12 01:56:38.999136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.999143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it. 
00:38:12.730 [2024-07-12 01:56:38.999526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:12.730 [2024-07-12 01:56:38.999533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:12.730 qpair failed and we were unable to recover it.
[... the same three-message failure sequence -- posix.c:1037:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 01:56:38.999 and 01:56:39.065; only the timestamps differ ...]
00:38:13.011 [2024-07-12 01:56:39.065649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.011 [2024-07-12 01:56:39.065655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.011 qpair failed and we were unable to recover it.
00:38:13.011 [2024-07-12 01:56:39.065990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.011 [2024-07-12 01:56:39.065996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.011 qpair failed and we were unable to recover it. 00:38:13.011 [2024-07-12 01:56:39.066331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.011 [2024-07-12 01:56:39.066338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.011 qpair failed and we were unable to recover it. 00:38:13.011 [2024-07-12 01:56:39.066657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.066664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.067033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.067040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.067379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.067386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.067775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.067781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.068096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.068102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.068419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.068426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.068800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.068806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.069118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.069125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 
00:38:13.012 [2024-07-12 01:56:39.069446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.069453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.069850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.069857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.070181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.070188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.070522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.070529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.070839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.070845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.071228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.071244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.071578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.071584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.071772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.071779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.072121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.072127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.072503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.072509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 
00:38:13.012 [2024-07-12 01:56:39.072827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.072833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.073140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.073147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.073474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.073481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.073791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.073797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.074190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.074196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.074515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.074521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.074656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.074664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.074997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.075005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.075207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.075214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.075577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.075585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 
00:38:13.012 [2024-07-12 01:56:39.075912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.075918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.076273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.076281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.076597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.076603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.076919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.076925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.077283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.077289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.077497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.077504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.077848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.077854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.078150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.078158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.078499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.078506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.078857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.078864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 
00:38:13.012 [2024-07-12 01:56:39.079189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.079195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.079399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.079407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.012 qpair failed and we were unable to recover it. 00:38:13.012 [2024-07-12 01:56:39.079699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.012 [2024-07-12 01:56:39.079706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.080061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.080067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.080377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.080384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.080742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.080749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.081062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.081068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.081391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.081398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.081714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.081720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.081907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.081915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 
00:38:13.013 [2024-07-12 01:56:39.082240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.082247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.082566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.082572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.082887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.082894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.083225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.083234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.083542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.083549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.083897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.083903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.084115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.084122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.084285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.084292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.084607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.084614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.084821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.084829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 
00:38:13.013 [2024-07-12 01:56:39.085168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.085174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.085496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.085503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.085686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.085693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.085990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.085996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.086382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.086389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.086722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.086729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.087091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.087098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.087452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.087458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.087775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.087781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.087979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.087987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 
00:38:13.013 [2024-07-12 01:56:39.088341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.088348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.088673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.088682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.088991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.088998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.089328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.089335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.089667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.089674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.089999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.090006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.090363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.090369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.090700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.090706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.091027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.091034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.091388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.091395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 
00:38:13.013 [2024-07-12 01:56:39.091733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.091740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.092036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.092043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.092375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.013 [2024-07-12 01:56:39.092382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.013 qpair failed and we were unable to recover it. 00:38:13.013 [2024-07-12 01:56:39.092739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.092746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.093055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.093061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.093377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.093384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.093577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.093590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.093940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.093947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.094255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.094262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.094677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.094683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 
00:38:13.014 [2024-07-12 01:56:39.095014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.095020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.095245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.095252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.095597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.095604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.095949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.095956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.096275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.096282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.096670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.096676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.096993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.097000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.097363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.097369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.097599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.097605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.097929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.097935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 
00:38:13.014 [2024-07-12 01:56:39.098239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.098246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.098567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.098573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.098908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.098914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.099234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.099242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.099587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.099594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.099923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.099930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.100220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.100226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.100574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.100581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.100908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.100914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.101209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.101215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 
00:38:13.014 [2024-07-12 01:56:39.101538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.101544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.101902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.101910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.102274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.102280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.102599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.102606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.102929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.102936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.103289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.103296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.103626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.103632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.103948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.103954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.104286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.104293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.104631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.104637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 
00:38:13.014 [2024-07-12 01:56:39.104967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.104973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.105285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.105292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.105633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.105640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.014 qpair failed and we were unable to recover it. 00:38:13.014 [2024-07-12 01:56:39.105838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.014 [2024-07-12 01:56:39.105844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.106211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.106218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.106539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.106547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.106931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.106938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.107294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.107301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.107460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.107466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.107809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.107816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 
00:38:13.015 [2024-07-12 01:56:39.108139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.108146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.108520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.108526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.108825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.108831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.109005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.109012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.109305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.109312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.109488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.109495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.109797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.109803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.110109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.110115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.110465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.110472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.110788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.110795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 
00:38:13.015 [2024-07-12 01:56:39.111032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.111038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.111396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.111402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.111726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.111733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.112047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.112053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.112368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.112375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.112701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.112709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.112949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.112956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.113280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.113287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.113624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.113630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.113945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.113952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 
00:38:13.015 [2024-07-12 01:56:39.114346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.114353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.114673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.114681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.114920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.114927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.115244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.115251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.115602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.115609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.115930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.115936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.015 [2024-07-12 01:56:39.116297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.015 [2024-07-12 01:56:39.116304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.015 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.116633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.116639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.116980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.116987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.117235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.117243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 
00:38:13.016 [2024-07-12 01:56:39.117557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.117564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.117886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.117893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.118091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.118098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.118424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.118431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.118744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.118751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.119085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.119091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.119284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.119292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.119657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.119663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.119981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.119987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.120235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.120242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 
00:38:13.016 [2024-07-12 01:56:39.120579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.120586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.120902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.120908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.121223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.121232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.121552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.121558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.121880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.121886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.122235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.122242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.122550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.122556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.122882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.122889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.123265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.123273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.123625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.123631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 
00:38:13.016 [2024-07-12 01:56:39.123950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.123956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.124311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.124317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.124626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.124632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.124940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.124946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.125268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.125275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.125475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.125483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.125780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.125786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.126139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.126146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.126471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.126478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.126801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.126807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 
00:38:13.016 [2024-07-12 01:56:39.127167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.127174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.127489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.127498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.127804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.127811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.128121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.128127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.128464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.128471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.128821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.128829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.129029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.016 [2024-07-12 01:56:39.129036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.016 qpair failed and we were unable to recover it. 00:38:13.016 [2024-07-12 01:56:39.129341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.129347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.129598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.129604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.129935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.129942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 
00:38:13.017 [2024-07-12 01:56:39.130260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.130267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.130605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.130612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.130936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.130943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.131258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.131265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.131461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.131468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.131760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.131767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.131961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.131968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.132365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.132372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.132706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.132712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.133035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.133042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 
00:38:13.017 [2024-07-12 01:56:39.133366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.133372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.133576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.133583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.133955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.133962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.134152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.134159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.134562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.134569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.134765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.134771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.135075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.135081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.135322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.135329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.135684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.135690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.136050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.136056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 
00:38:13.017 [2024-07-12 01:56:39.136237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.136245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.136545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.136551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.136867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.136873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.137197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.137204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.137498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.137505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.137812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.137818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.138177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.138184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.138500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.138506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.138898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.138904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.139260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.139267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 
00:38:13.017 [2024-07-12 01:56:39.139600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.139607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.139964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.139972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.140334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.140341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.140659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.140666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.140982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.140989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.141381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.141388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.141630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.141636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.017 qpair failed and we were unable to recover it. 00:38:13.017 [2024-07-12 01:56:39.141952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.017 [2024-07-12 01:56:39.141959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.142287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.142295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.142631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.142638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 
00:38:13.018 [2024-07-12 01:56:39.143028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.143035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.143343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.143350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.143664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.143671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.144065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.144072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.144426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.144432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.144758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.144764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.145048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.145055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.145415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.145422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.145748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.145755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.146082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.146089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 
00:38:13.018 [2024-07-12 01:56:39.146476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.146483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.146800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.146807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.147164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.147170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.147496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.147503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.147820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.147826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.148217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.148224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.148548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.148555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.148874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.148880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.149124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.149131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.149441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.149448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 
00:38:13.018 [2024-07-12 01:56:39.149644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.149651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.150013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.150020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.150375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.150381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.150696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.150703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.151064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.151071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.151384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.151390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.151736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.151742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.151922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.151930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.152215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.152221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.152525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.152532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 
00:38:13.018 [2024-07-12 01:56:39.152871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.152877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.153194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.153202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.153527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.153533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.153888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.153894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.154202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.154209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.154521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.154528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.154920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.154928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.018 [2024-07-12 01:56:39.155254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.018 [2024-07-12 01:56:39.155261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.018 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.155550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.155556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.155904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.155911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 
00:38:13.019 [2024-07-12 01:56:39.156231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.156238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.156573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.156580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.156887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.156893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.157209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.157216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.157517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.157524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.157854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.157860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.158182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.158190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.158377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.158385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.158604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.158611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.158942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.158949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 
00:38:13.019 [2024-07-12 01:56:39.159270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.159277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.159608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.159614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.159931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.159937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.160135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.160142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.160461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.160468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.160823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.160829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.161136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.161142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.161485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.161492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.161824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.161830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.162183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.162190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 
00:38:13.019 [2024-07-12 01:56:39.162508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.162515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.162908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.162914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.163270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.163277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.163600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.163607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.163911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.163918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.164274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.164281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.164594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.164601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.164952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.164958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.165272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.165279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.165597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.165603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 
00:38:13.019 [2024-07-12 01:56:39.165922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.165928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.166275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.166284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.166617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.019 [2024-07-12 01:56:39.166623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.019 qpair failed and we were unable to recover it. 00:38:13.019 [2024-07-12 01:56:39.166979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.166985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.167290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.167304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.167656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.167662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.167971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.167977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.168292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.168298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.168615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.168621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.168940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.168947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 
00:38:13.020 [2024-07-12 01:56:39.169313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.169320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.169648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.169656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.169985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.169992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.170315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.170321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.170633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.170640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.170994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.171001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.171310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.171317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.171663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.171670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.171984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.171990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.172328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.172335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 
00:38:13.020 [2024-07-12 01:56:39.172659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.172666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.172983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.172991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.173224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.173241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.173554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.173561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.173885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.173891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.174095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.174102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.174412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.174418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.174620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.174627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.174917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.174924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.175239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.175246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 
00:38:13.020 [2024-07-12 01:56:39.175632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.175638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.175993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.176000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.176309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.176315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.176663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.176669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.176988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.176994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.177304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.177311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.177643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.177650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.178013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.178020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.178347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.178355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.178676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.178683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 
00:38:13.020 [2024-07-12 01:56:39.178878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.178885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.179110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.179119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.179473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.179480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.020 qpair failed and we were unable to recover it. 00:38:13.020 [2024-07-12 01:56:39.179803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.020 [2024-07-12 01:56:39.179810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.180170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.180176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.180409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.180417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.180789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.180796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.181032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.181039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.181373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.181380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.181454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.181460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 
00:38:13.021 [2024-07-12 01:56:39.181768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.181775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.182127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.182134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.182515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.182521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.182831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.182837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.183067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.183074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.183409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.183416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.183724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.183730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.184039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.184045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.184236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.184244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.184576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.184583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 
00:38:13.021 [2024-07-12 01:56:39.184909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.184915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.185226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.185235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.185513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.185519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.185847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.185853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.186187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.186193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.186586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.186593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.186898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.186904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.187227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.187237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.187572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.187579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.187914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.187921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 
00:38:13.021 [2024-07-12 01:56:39.188239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.188246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.188577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.188583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.188947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.188954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.189263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.189270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.189481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.189489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.189863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.189869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.190170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.190176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.190498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.190505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.190861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.190868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.191180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.191186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 
00:38:13.021 [2024-07-12 01:56:39.191511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.191517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.191841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.191851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.192161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.192168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.192535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.021 [2024-07-12 01:56:39.192542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.021 qpair failed and we were unable to recover it. 00:38:13.021 [2024-07-12 01:56:39.192947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.192954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.193170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.193177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.193424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.193431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.193753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.193760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.194059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.194066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.194430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.194436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 
00:38:13.022 [2024-07-12 01:56:39.194756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.194762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.195079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.195085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.195287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.195294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.195632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.195639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.195948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.195955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.196262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.196268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.196593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.196600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.196907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.196913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.197268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.197275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.197625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.197632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 
00:38:13.022 [2024-07-12 01:56:39.197955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.197961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.198258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.198265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.198478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.198485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.198814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.198821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.199075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.199082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.199420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.199427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.199751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.199757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.200098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.200105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.200341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.200348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.200639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.200645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 
00:38:13.022 [2024-07-12 01:56:39.200981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.200987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.201298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.201305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.201669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.201675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.202045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.202052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.202414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.202422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.202656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.202662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.202879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.202886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.203129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.203136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.203506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.203513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.203863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.203870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 
00:38:13.022 [2024-07-12 01:56:39.204270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.204277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.204623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.204630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.204983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.204990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.205299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.205305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.022 [2024-07-12 01:56:39.205464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.022 [2024-07-12 01:56:39.205470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.022 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.205797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.205803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.206157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.206165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.206533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.206540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.206849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.206855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.207163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.207169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 
00:38:13.023 [2024-07-12 01:56:39.207436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.207443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.207768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.207775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.208101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.208107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.208511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.208518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.208867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.208875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.209203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.209211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.209324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.209331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.209669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.209676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.210040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.210047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.210362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.210369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 
00:38:13.023 [2024-07-12 01:56:39.210690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.210697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.211012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.211018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.211335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.211342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.211659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.211665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.211849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.211857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.212150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.212156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.212358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.212366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.212650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.212657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.212951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.212959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.213000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.213006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 
00:38:13.023 [2024-07-12 01:56:39.213300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.213307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.213676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.213683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.213988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.213995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.214349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.214356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.214672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.214679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.214910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.214917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.215130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.215137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.215489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.215496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.215810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.215816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 00:38:13.023 [2024-07-12 01:56:39.216139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.023 [2024-07-12 01:56:39.216147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.023 qpair failed and we were unable to recover it. 
00:38:13.024 [2024-07-12 01:56:39.216495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.024 [2024-07-12 01:56:39.216502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.024 qpair failed and we were unable to recover it.
00:38:13.024 [... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 01:56:39.216857 through 01:56:39.257875; only the timestamps change ...]
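On Linux, errno 111 is ECONNREFUSED: the host's connect() reaches 10.0.0.2, but nothing is accepting on port 4420 while the target is down, so each reconnect attempt fails immediately and the qpair cannot recover. Below is a minimal standalone sketch of how a plain connect() surfaces this errno; it is illustrative only, is not SPDK's posix_sock_create(), and assumes 10.0.0.2:4420 is reachable but has no listener.

/* Minimal sketch: one TCP connect() attempt that reports errno 111
 * (ECONNREFUSED) when the peer is reachable but nothing is listening.
 * Address and port mirror the log (10.0.0.2:4420). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, this typically prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}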
00:38:13.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 74616 Killed "${NVMF_APP[@]}" "$@"
00:38:13.027 [... connect() failed, errno = 111 / qpair failed records continue (01:56:39.258243 through 01:56:39.260216), interleaved with the shell trace below ...]
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
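The trace above restarts the target application: disconnect_init invokes nvmfappstart -m 0xF0, i.e. nvmf_tgt pinned to the CPU cores selected by the hexadecimal core mask 0xF0 (bits 4-7, so cores 4, 5, 6 and 7). The small sketch below just decodes such a mask; it is illustrative and not part of the test scripts.

/* Sketch: decode a hexadecimal CPU core mask such as the 0xF0 passed to
 * nvmf_tgt via -m. 0xF0 has bits 4..7 set, i.e. cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;                    /* value taken from the log */
    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");                                 /* prints: 4 5 6 7 */
    return 0;
}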
00:38:13.027 [... the same connect() failed, errno = 111 / sock connection error / qpair failed record repeats for tqpair=0x7f1df8000b90 (addr=10.0.0.2, port=4420) from 01:56:39.260573 through 01:56:39.266933 ...]
00:38:13.027 [... connect() failed, errno = 111 / qpair failed records continue (01:56:39.267235 through 01:56:39.271777), interleaved with the shell trace below ...]
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=75629
00:38:13.027 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 75629
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 75629 ']'
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:38:13.028 01:56:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
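waitforlisten 75629 then waits, as its own echo says, until the newly started nvmf_tgt (pid 75629) is listening on the UNIX domain socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. The sketch below is a rough illustration of that kind of readiness poll under those assumptions; it is not the actual autotest_common.sh helper.

/* Rough sketch of a readiness poll: retry connecting to a UNIX domain
 * socket (here /var/tmp/spdk.sock, as in the log) until it accepts a
 * connection or the retry budget is exhausted. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                   /* the RPC socket is accepting connections */
        }
        close(fd);
        usleep(100 * 1000);             /* back off 100 ms before the next attempt */
    }
    return -1;                          /* the process never started listening */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0) {
        printf("process is listening on /var/tmp/spdk.sock\n");
    } else {
        printf("timed out waiting for /var/tmp/spdk.sock\n");
    }
    return 0;
}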
00:38:13.028 [... the same connect() failed, errno = 111 / sock connection error / qpair failed record repeats for tqpair=0x7f1df8000b90 (addr=10.0.0.2, port=4420) from 01:56:39.272134 through 01:56:39.280770 ...]
00:38:13.029 [2024-07-12 01:56:39.281089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.281095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.281331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.281338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.281670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.281677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.281993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.282000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.282324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.282331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.282542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.282549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.282946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.282953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.283288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.283296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.283637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.283643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.283870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.283877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 
00:38:13.029 [2024-07-12 01:56:39.284212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.284218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.284430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.284437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.284640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.284646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.284934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.284941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.285263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.285270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.285616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.285623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.285987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.285993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.286307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.286315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.286726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.286732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.287043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.287050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 
00:38:13.029 [2024-07-12 01:56:39.287360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.287367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.287703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.287709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.287749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.287755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.288075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.288081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.288290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.288297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.288650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.288657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.289007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.289014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.289269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.289276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.289608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.289614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.029 qpair failed and we were unable to recover it. 00:38:13.029 [2024-07-12 01:56:39.289955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.029 [2024-07-12 01:56:39.289962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 
00:38:13.030 [2024-07-12 01:56:39.290274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.290281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.290621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.290628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.290952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.290958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.291281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.291288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.291653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.291660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.291785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.291792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 
00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Read completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 Write completed with error (sct=0, sc=8) 00:38:13.030 starting I/O failed 00:38:13.030 [2024-07-12 01:56:39.292058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:13.030 [2024-07-12 01:56:39.292303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.292322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118c0a0 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 
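The block above is a different failure mode from the connect() retries: each "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entry is an outstanding command on the queue pair being completed with an NVMe error status instead of success, and the closing nvme_qpair.c entry is spdk_nvme_qpair_process_completions() reporting a CQ transport error of -6, i.e. -ENXIO ("No such device or address"), on qpair id 3 before the connect() retries resume below. Reading sct=0 as the generic command status type, sc=8 in that type is, per my reading of the NVMe base specification (treat the exact name as an assumption, not SPDK output), "Command Aborted due to SQ Deletion", which matches a queue pair being torn down with I/O still in flight. A small sketch that decodes the pair printed in the log:

/* Hedged sketch: map the (sct, sc) values from the log to a readable name.
 * The strings follow my reading of the NVMe base spec's Generic Command
 * Status values and are an assumption, not authoritative driver output. */
#include <stdio.h>

static const char *generic_status_name(int sct, int sc)
{
    if (sct != 0) {
        return "non-generic status code type";
    }
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x6: return "Internal Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "other generic status";
    }
}

int main(void)
{
    /* Values taken from the entries above. */
    printf("sct=0, sc=8 -> %s\n", generic_status_name(0, 8));
    return 0;
}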
00:38:13.030 [2024-07-12 01:56:39.292761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.292798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118c0a0 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.293120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.293129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.293481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.293489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.293806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.293813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.294029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.294037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.294408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.294416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.294730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.294738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.295058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.295065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.295407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.295414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.295608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.295615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 
00:38:13.030 [2024-07-12 01:56:39.296017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.296025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.296360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.296367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.296740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.296746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.296921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.296928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.297289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.297296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.297706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.297713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.298057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.298063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.298467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.298474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.298797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.298804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.299130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.299136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 
00:38:13.030 [2024-07-12 01:56:39.299334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.299349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.299556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.030 [2024-07-12 01:56:39.299563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.030 qpair failed and we were unable to recover it. 00:38:13.030 [2024-07-12 01:56:39.299776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.299783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.299845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.299853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.300180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.300186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.300377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.300384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.300691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.300698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.301040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.301046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.301393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.301400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.301726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.301732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 
00:38:13.031 [2024-07-12 01:56:39.302081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.302087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.302284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.302291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.302703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.302709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.303061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.303067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.303377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.303384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.303712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.303720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.304025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.304033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.304371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.304378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.304597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.304604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.304969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.304976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 
00:38:13.031 [2024-07-12 01:56:39.305310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.305317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.305534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.305542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.305929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.305938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.306308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.306315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.306646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.306652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.306969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.306975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.307286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.307293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.307459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.307466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.307853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.307860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.308224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.308234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 
00:38:13.031 [2024-07-12 01:56:39.308547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.308554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.308871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.308879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.309213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.309221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.309562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.309570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.309923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.309929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.310132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.310139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.310574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.310581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.310900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.310907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.311236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.311243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.311452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.311459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 
00:38:13.031 [2024-07-12 01:56:39.311773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.311780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.312217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.312224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.312614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.031 [2024-07-12 01:56:39.312621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.031 qpair failed and we were unable to recover it. 00:38:13.031 [2024-07-12 01:56:39.312989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.312996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.313328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.313336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.313557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.313563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.313803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.313809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.314203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.314209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.314586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.314593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.314788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.314795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 
00:38:13.032 [2024-07-12 01:56:39.315154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.315161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.315511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.315518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.315865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.315871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.316210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.316216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.316615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.316623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.316953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.316960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.317304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.317311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.317661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.317668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.318008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.318014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.318426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.318433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 
00:38:13.032 [2024-07-12 01:56:39.318750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.318757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.319102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.319109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.319424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.319434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.319773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.319781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.320143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.320150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.320466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.320473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.320816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.320823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.321161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.321168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.321374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.321382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.321665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.321671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 
00:38:13.032 [2024-07-12 01:56:39.322003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.322010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.322026] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:13.032 [2024-07-12 01:56:39.322070] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.032 [2024-07-12 01:56:39.322372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.322379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.322720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.322726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.323059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.323066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.323262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.323269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.323318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.323326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.323636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.323643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.032 qpair failed and we were unable to recover it. 00:38:13.032 [2024-07-12 01:56:39.323982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.032 [2024-07-12 01:56:39.323989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.324329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.324337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 
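The two bracketed entries in the middle of this block are not connection failures: they record an SPDK process, launched with the EAL application name nvmf, starting up as SPDK v24.05.1-pre (git sha1 5fa2f5086) against DPDK 23.11.0, together with the EAL parameters it was given: a core mask of 0xF0, --no-telemetry, per-library log levels, --base-virtaddr=0x200000000000, --match-allocations, --file-prefix=spdk0 to keep its hugepage files separate from other SPDK processes, and --proc-type=auto. While it initializes, the host keeps re-dialing and keeps getting refused, which is why the errno = 111 entries continue around it. A small, self-contained sketch (plain C, not DPDK or SPDK code) that expands the -c 0xF0 mask into the CPU cores it selects:

/* Hedged sketch: expand the EAL core mask from the log (-c 0xF0) into the
 * individual cores it enables. 0xF0 sets bits 4..7, so the process is
 * confined to cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long core_mask = 0xF0;   /* value taken from the EAL parameters above */

    printf("core mask 0x%lX selects cores:", core_mask);
    for (unsigned int core = 0; core < 8 * sizeof(core_mask); core++) {
        if (core_mask & (1UL << core)) {
            printf(" %u", core);
        }
    }
    printf("\n");
    return 0;
}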
00:38:13.033 [2024-07-12 01:56:39.324690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.324698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.325045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.325052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.325263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.325271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.325442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.325449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.325813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.325820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.326176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.326183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.326514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.326522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.326929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.326936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.327261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.327268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.327603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.327610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 
00:38:13.033 [2024-07-12 01:56:39.327949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.327956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.328351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.328359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.328713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.328720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.329054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.329062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.329249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.329257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.329581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.329588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.329921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.329928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.330298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.330306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.330504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.330511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.330724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.330731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 
00:38:13.033 [2024-07-12 01:56:39.331016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.331023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.331246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.331253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.331469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.331478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.331855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.331862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.332059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.332066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.332423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.332431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.332659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.332666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.332874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.332882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.333219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.333226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.333592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.333599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 
00:38:13.033 [2024-07-12 01:56:39.333842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.333849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.334186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.334194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.334540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.334547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.334906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.334912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.335239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.335246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.335648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.335655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.335824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.335831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.336170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.336176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.033 qpair failed and we were unable to recover it. 00:38:13.033 [2024-07-12 01:56:39.336512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.033 [2024-07-12 01:56:39.336518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.336846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.336853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 
00:38:13.034 [2024-07-12 01:56:39.337186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.337193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.337405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.337412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.337720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.337727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.338059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.338066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.338401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.338408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.338735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.338741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.339059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.339066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.339379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.339385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.339719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.339726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.339915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.339922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 
00:38:13.034 [2024-07-12 01:56:39.340281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.340288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.340627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.340634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.340959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.340965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.341303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.341309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.341510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.341517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.341734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.341741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.342128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.342135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.342364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.342371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.342592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.342599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.342782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.342789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 
00:38:13.034 [2024-07-12 01:56:39.343190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.343196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.343518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.343525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.343886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.343894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.344170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.344177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.344501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.344508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.344827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.344834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.345157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.345164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.345358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.345366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.345697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.345705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.346078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.346085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 
00:38:13.034 [2024-07-12 01:56:39.346412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.346419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.346774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.346781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.346950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.346957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.347298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.347305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.347637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.347644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.347957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.347963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.348157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.348165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.348498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.348504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.348834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.034 [2024-07-12 01:56:39.348840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.034 qpair failed and we were unable to recover it. 00:38:13.034 [2024-07-12 01:56:39.349167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.349175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 
00:38:13.035 [2024-07-12 01:56:39.349511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.349519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.349716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.349724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.349917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.349924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.350272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.350279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.350494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.350500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.350810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.350816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.351162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.351169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.351382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.351389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.351699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.351706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.351899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.351907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 
00:38:13.035 [2024-07-12 01:56:39.352092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.352099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.035 [2024-07-12 01:56:39.352430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.035 [2024-07-12 01:56:39.352437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.035 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.352768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.352776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.353090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.353097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.353466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.353473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.353700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.353707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.354050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.354056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.354380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.354387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.354727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.354734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.355059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.355067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 
00:38:13.310 [2024-07-12 01:56:39.355400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.355407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.355634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.355641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.355963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.355972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.356307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.356314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.356734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.356741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.357108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.357114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.357526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.357533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.357841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.357847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.358158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.358164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.358552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.358559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 
00:38:13.310 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.310 [2024-07-12 01:56:39.358919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.358927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.359122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.359129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.359425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.359433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.359771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.359778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.360018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.360025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.360366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.360373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.310 qpair failed and we were unable to recover it. 00:38:13.310 [2024-07-12 01:56:39.360705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.310 [2024-07-12 01:56:39.360712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.361113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.361120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.361416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.361423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.361737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.361743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 
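The EAL line at the start of the block above notes that no free 2048 kB hugepages were reported on NUMA node 1 while DPDK was initializing for the nvmf target. As a hedged sketch (not taken from this test run; the sysfs path is the standard Linux location for 2 MB hugepages), the free hugepage count the kernel exposes can be read like this:

/* Sketch: read the system-wide free 2048 kB hugepage count from sysfs,
 * the same resource the EAL message above is reporting on. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("free 2048 kB hugepages: %ld\n", free_pages);
    fclose(f);
    return 0;
}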
00:38:13.311 [2024-07-12 01:56:39.362053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.362060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.362365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.362372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.362714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.362720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.363033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.363039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.363373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.363380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.363709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.363716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.364026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.364033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.364369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.364376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.364719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.364726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.364920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.364928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 
00:38:13.311 [2024-07-12 01:56:39.365281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.365288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.365587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.365594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.365915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.365921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.366296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.366303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.366634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.366641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.366813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.366820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.367077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.367083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.367501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.367508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.367839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.367846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.368187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.368193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 
00:38:13.311 [2024-07-12 01:56:39.368394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.368401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.368757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.368763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.369087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.369096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.369361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.369368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.369716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.369722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.370038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.370045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.370375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.370382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.370722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.370729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.371014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.371022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.371377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.371385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 
00:38:13.311 [2024-07-12 01:56:39.371729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.371737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.371989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.371996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.372333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.372339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.372684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.372691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.373021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.373027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.311 qpair failed and we were unable to recover it. 00:38:13.311 [2024-07-12 01:56:39.373260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.311 [2024-07-12 01:56:39.373267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.373521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.373528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.373840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.373846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.374246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.374253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.374595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.374601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 
00:38:13.312 [2024-07-12 01:56:39.374965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.374972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.375337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.375344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.375659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.375666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.376033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.376040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.376290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.376297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.376591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.376598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.376762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.376777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.377124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.377131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.377429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.377435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.377792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.377798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 
00:38:13.312 [2024-07-12 01:56:39.378116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.378123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.378518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.378524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.378702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.378708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.379112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.379119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.379529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.379536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.379857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.379863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.380114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.380121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.380360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.380367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.380718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.380724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.381113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.381120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 
00:38:13.312 [2024-07-12 01:56:39.381469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.381476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.381834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.381841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.382181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.382195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.382525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.382532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.382734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.382740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.382961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.382967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.383162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.383169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.383348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.383355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.383791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.383797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.384117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.384124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 
00:38:13.312 [2024-07-12 01:56:39.384471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.384477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.384798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.384805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.384995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.385001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.385335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.385343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.385670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.385677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.385990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.312 [2024-07-12 01:56:39.385997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.312 qpair failed and we were unable to recover it. 00:38:13.312 [2024-07-12 01:56:39.386294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.386301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.386622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.386628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.386965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.386972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.387315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.387321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 
00:38:13.313 [2024-07-12 01:56:39.387663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.387669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.388022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.388029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.388406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.388413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.388738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.388746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.389078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.389085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.389450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.389457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.389791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.389797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.390010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.390017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.390262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.390269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.390679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.390685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 
00:38:13.313 [2024-07-12 01:56:39.390980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.390986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.391338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.391345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.391624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.391630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.391959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.391965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.392294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.392301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.392623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.392629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.392956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.392964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.393206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.393213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.393405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.393413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.393706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.393714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 
00:38:13.313 [2024-07-12 01:56:39.394050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.394057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.394429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.394436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.394709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.394717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.395065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.395071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.395405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.395412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.395578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.395585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.395962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.395968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.396171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.396178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.396531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.396538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.396868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.396876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 
00:38:13.313 [2024-07-12 01:56:39.397203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.397211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.397552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.397559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.397779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.397785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.398163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.398169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.398512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.398519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.313 [2024-07-12 01:56:39.398698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.313 [2024-07-12 01:56:39.398704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.313 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.398950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.398957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.399320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.399327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.399569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.399575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.399940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.399948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 
00:38:13.314 [2024-07-12 01:56:39.400286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.400293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.400607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.400613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.400779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.400785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.400974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.400980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.401324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.401330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.401669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.401676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.402004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.402012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.402371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.402378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.402720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.402727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.402939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.402948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 
00:38:13.314 [2024-07-12 01:56:39.403256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.403264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.403603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.403610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.403932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.403938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.404136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.404142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.404550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.404556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.404883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.404890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.405241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.405248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.405555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.405561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.405770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.405776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.406084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.406091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 
00:38:13.314 [2024-07-12 01:56:39.406423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.406431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.406748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.406754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.406996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.407004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.407273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.407280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.407679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.407686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.407996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.408002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.408322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.408328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.314 [2024-07-12 01:56:39.408563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.314 [2024-07-12 01:56:39.408570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.314 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.408888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.408895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.409186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.409193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 
00:38:13.315 [2024-07-12 01:56:39.409522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.409529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.409846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.409853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.410193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.410199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.410410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.410418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.410660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.410667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.410976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.410982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.411323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.411330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.411659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.411665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.411982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.411989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.412307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.412314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 
00:38:13.315 [2024-07-12 01:56:39.412476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.412483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.412777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.412783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.413113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.413119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.413429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.413437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.413649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.413656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.414001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.414008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.414331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.414337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.414677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.414684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.415050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.415056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.415289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.415297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 
00:38:13.315 [2024-07-12 01:56:39.415620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.415626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.415929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.415936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.416126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.416134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.416455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.416462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.416858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.416865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.417078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.417085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.417432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.417439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.417593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.315 [2024-07-12 01:56:39.417766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.417774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.418099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.418106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 
00:38:13.315 [2024-07-12 01:56:39.418481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.418488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.418698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.418705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.419084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.419091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.419464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.419472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.419793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.419800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.420115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.420121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.420468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.420476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.420738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.420745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.420965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.315 [2024-07-12 01:56:39.420971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.315 qpair failed and we were unable to recover it. 00:38:13.315 [2024-07-12 01:56:39.421273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.421280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 
00:38:13.316 [2024-07-12 01:56:39.421524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.421531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.421912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.421919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.422243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.422251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.422646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.422653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.422824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.422832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.423164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.423172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.423542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.423551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.423788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.423795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.424137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.424144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 00:38:13.316 [2024-07-12 01:56:39.424397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.316 [2024-07-12 01:56:39.424404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.316 qpair failed and we were unable to recover it. 
00:38:13.316 [2024-07-12 01:56:39.424742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.316 [2024-07-12 01:56:39.424750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.316 qpair failed and we were unable to recover it.
[The same three-record sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats for every reconnect attempt from 01:56:39.425 through 01:56:39.449.]
00:38:13.317 [2024-07-12 01:56:39.447773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.317 [2024-07-12 01:56:39.447780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.448132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.448140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.448485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.448492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.448850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.448856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.449211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.449218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.449577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.449585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.449936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.449944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 00:38:13.318 [2024-07-12 01:56:39.449946] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.318 [2024-07-12 01:56:39.449973] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.318 [2024-07-12 01:56:39.449980] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.318 [2024-07-12 01:56:39.449987] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.318 [2024-07-12 01:56:39.449993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.318 [2024-07-12 01:56:39.450264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.318 [2024-07-12 01:56:39.450272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.318 qpair failed and we were unable to recover it. 
00:38:13.318 [2024-07-12 01:56:39.450535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:38:13.318 [2024-07-12 01:56:39.450740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:38:13.318 [2024-07-12 01:56:39.450872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:38:13.318 [2024-07-12 01:56:39.450872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
[The connect() failed (errno = 111) / sock connection error / qpair failed sequence keeps repeating for every reconnect attempt while the reactors come up, from 01:56:39.450 through 01:56:39.490, always against tqpair=0x7f1df8000b90 at 10.0.0.2 port 4420.]
00:38:13.321 [2024-07-12 01:56:39.490325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.321 [2024-07-12 01:56:39.490332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.321 qpair failed and we were unable to recover it.
00:38:13.321 [2024-07-12 01:56:39.490522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.490529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.490737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.490744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.491151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.491157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.491475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.491482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.491812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.491819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.491890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.491896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.492210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.492216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.492539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.492546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.492903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.492910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.493020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.493027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 
00:38:13.321 [2024-07-12 01:56:39.493350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.493357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.493716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.493722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.494044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.494051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.494381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.494388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.494793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.494800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.495027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.321 [2024-07-12 01:56:39.495033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.321 qpair failed and we were unable to recover it. 00:38:13.321 [2024-07-12 01:56:39.495215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.495222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.495422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.495428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.495773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.495779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.496018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.496025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 
00:38:13.322 [2024-07-12 01:56:39.496356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.496364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.496567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.496578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.496768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.496775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.496989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.496996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.497336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.497343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.497513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.497520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.497685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.497694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.498035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.498041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.498372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.498379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.498654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.498661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 
00:38:13.322 [2024-07-12 01:56:39.498901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.498908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.499242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.499249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.499557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.499563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.499891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.499898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.500226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.500236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.500581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.500588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.500948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.500954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.501007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.501013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.501331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.501337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.501678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.501684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 
00:38:13.322 [2024-07-12 01:56:39.501895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.501902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.502106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.502113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.502464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.502471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.502792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.502798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.503003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.503009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.503208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.503214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.503612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.503618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.504001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.504007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.504235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.504241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.504571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.504577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 
00:38:13.322 [2024-07-12 01:56:39.504896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.504903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.505214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.505222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.505385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.505392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.505609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.505616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.505975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.505982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.506181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.506189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.322 qpair failed and we were unable to recover it. 00:38:13.322 [2024-07-12 01:56:39.506529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.322 [2024-07-12 01:56:39.506536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.506855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.506862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.507173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.507180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.507574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.507580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 
00:38:13.323 [2024-07-12 01:56:39.507901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.507907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.508234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.508241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.508451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.508457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.508622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.508629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.508949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.508956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.509298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.509305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.509651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.509659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.509973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.509979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.510306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.510313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.510496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.510502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 
00:38:13.323 [2024-07-12 01:56:39.510745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.510752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.510949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.510955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.511294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.511301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.511645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.511651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.511963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.511970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.512329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.512336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.512507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.512514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.512928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.512935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.513247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.513254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.513471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.513478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 
00:38:13.323 [2024-07-12 01:56:39.513687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.513694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.514009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.514015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.514406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.514413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.514577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.514584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.514959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.514965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.515323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.515330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.515645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.515651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.515850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.515857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.516277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.516284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.516510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.516517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 
00:38:13.323 [2024-07-12 01:56:39.516936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.323 [2024-07-12 01:56:39.516942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.323 qpair failed and we were unable to recover it. 00:38:13.323 [2024-07-12 01:56:39.517175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.517181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.517502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.517508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.517921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.517928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.518256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.518263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.518591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.518597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.518783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.518789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.518948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.518954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.519262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.519269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.519522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.519529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 
00:38:13.324 [2024-07-12 01:56:39.519870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.519877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.520081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.520088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.520412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.520419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.520659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.520666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.520833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.520839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.521021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.521028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.521245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.521254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.521626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.521632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.521950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.521957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.522198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.522206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 
00:38:13.324 [2024-07-12 01:56:39.522548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.522555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.522785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.522793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.522979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.522987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.523163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.523170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.523529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.523536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.523853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.523859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.524101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.524108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.524447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.524453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.524779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.524786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.525161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.525167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 
00:38:13.324 [2024-07-12 01:56:39.525553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.525560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.525786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.525792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.526079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.526085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.526316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.526323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.526540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.526546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.526870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.526876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.527188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.527195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.527251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.527258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.527568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.527574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 00:38:13.324 [2024-07-12 01:56:39.527896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.324 [2024-07-12 01:56:39.527903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.324 qpair failed and we were unable to recover it. 
00:38:13.324 [2024-07-12 01:56:39.528105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.324 [2024-07-12 01:56:39.528112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.324 qpair failed and we were unable to recover it.
[The same three-message sequence repeats without interruption for every subsequent connection attempt, up to the final attempt at 01:56:39.590120 shown below (log prefixes 00:38:13.324-00:38:13.330): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." The intervening duplicate entries are elided here.]
00:38:13.330 [2024-07-12 01:56:39.590120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.330 [2024-07-12 01:56:39.590127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.330 qpair failed and we were unable to recover it.
00:38:13.330 [2024-07-12 01:56:39.590479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.590486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.590801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.590807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.590860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.590866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.591042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.591048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.591289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.591295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.591678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.591684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.592050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.592056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.592244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.592251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.592471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.592478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.592728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.592735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 
00:38:13.330 [2024-07-12 01:56:39.593057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.593064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.593301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.593308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.593485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.593491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.593819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.593825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.594199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.594205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.594251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.594259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.594569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.594575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.594813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.594819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.595142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.595149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.595446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.595453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 
00:38:13.330 [2024-07-12 01:56:39.595837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.595844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.596200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.596208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.596446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.596454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.596791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.596797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.597032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.597038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.597316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.597322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.597681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.597687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.598085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.598092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.598449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.598456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 00:38:13.330 [2024-07-12 01:56:39.598650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.330 [2024-07-12 01:56:39.598656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.330 qpair failed and we were unable to recover it. 
00:38:13.330 [2024-07-12 01:56:39.598911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.598918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.599144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.599150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.599345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.599353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.599698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.599707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.600018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.600024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.600214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.600220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.600552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.600559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.600893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.600900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.601298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.601305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.601480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.601487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 
00:38:13.331 [2024-07-12 01:56:39.601800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.601806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.602035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.602042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.602396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.602403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.602626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.602632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.602930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.602938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.603281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.603287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.603624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.603630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.603834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.603840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.604033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.604040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.604356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.604363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 
00:38:13.331 [2024-07-12 01:56:39.604708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.604714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.605026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.605032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.605222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.605231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.605533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.605539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.605867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.605873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.606086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.606093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.606273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.606279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.606666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.606674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.606861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.606868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.607227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.607237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 
00:38:13.331 [2024-07-12 01:56:39.607572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.607579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.607765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.607771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.608193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.608200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.608402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.608408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.608621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.608627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.331 qpair failed and we were unable to recover it. 00:38:13.331 [2024-07-12 01:56:39.608675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.331 [2024-07-12 01:56:39.608681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.609087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.609094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.609481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.609487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.609901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.609907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.610227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.610239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 
00:38:13.332 [2024-07-12 01:56:39.610437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.610444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.610869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.610876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.611200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.611206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.611411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.611420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.611673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.611680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.611867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.611873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.612224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.612232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.612550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.612556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.612925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.612933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.613260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.613267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 
00:38:13.332 [2024-07-12 01:56:39.613572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.613579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.613973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.613979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.614158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.614165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.614505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.614512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.614832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.614839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.615152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.615158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.615482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.615488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.615802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.615809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.616014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.616020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.616343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.616350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 
00:38:13.332 [2024-07-12 01:56:39.616666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.616673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.617013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.617020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.617228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.617240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.617614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.617621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.617855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.617862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.618179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.618186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.618521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.618528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.618716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.618723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.618900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.618906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.619245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.619252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 
00:38:13.332 [2024-07-12 01:56:39.619450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.619458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.619647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.619654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.619846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.619853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.620212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.620219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.332 [2024-07-12 01:56:39.620557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-07-12 01:56:39.620564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.332 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.620888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.620896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.621214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.621221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.621537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.621543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.621865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.621872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.622106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.622113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 
00:38:13.333 [2024-07-12 01:56:39.622449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.622456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.622777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.622784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.623111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.623118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.623508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.623518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.623962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.623969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.624301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.624308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.624676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.624684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.625075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.625083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.625253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.625262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.625449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.625456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 
00:38:13.333 [2024-07-12 01:56:39.625766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.625773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.626137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.626144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.626381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.626390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.626732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.626738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.627054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.627061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.627503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.627510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.627824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.627831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.628146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.628153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.628481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.628488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.628850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.628857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 
00:38:13.333 [2024-07-12 01:56:39.629096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.629103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.629443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.629450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.629631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.629638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.629888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.629896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.630326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.630334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.630567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.630575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.630760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.630766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.631137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.631144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.631482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.631489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 00:38:13.333 [2024-07-12 01:56:39.631850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.333 [2024-07-12 01:56:39.631856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.333 qpair failed and we were unable to recover it. 
00:38:13.612 [2024-07-12 01:56:39.691025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.691032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.691433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.691440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.691629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.691635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.691831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.691837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.692079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.692087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.692405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.692412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.692626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.692634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.692987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.692995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.693236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.693243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.693569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.693575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 
00:38:13.612 [2024-07-12 01:56:39.693891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.693898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.694093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.694100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.694404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.694410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.694579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.694586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.694969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.694975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.695290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.695298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.695630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.695637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.696048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.696234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.696552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 
00:38:13.612 [2024-07-12 01:56:39.696605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.696652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.696971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.696978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.612 [2024-07-12 01:56:39.697389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.612 [2024-07-12 01:56:39.697396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.612 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.697719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.697726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.698046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.698053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.698384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.698391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.698583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.698590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.698884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.698891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.699240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.699247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 
00:38:13.613 [2024-07-12 01:56:39.699614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.699620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.699951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.699959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.700137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.700144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.700359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.700366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.700605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.700611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.700782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.700788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.701190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.701197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.701510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.701517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.701762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.701769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.702132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.702139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 
00:38:13.613 [2024-07-12 01:56:39.702387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.702394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.702628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.702635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.702964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.702970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.703282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.703289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.703332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.703338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.703714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.703721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.703961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.703967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.704299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.704307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.704641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.704649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.704979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.704987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 
00:38:13.613 [2024-07-12 01:56:39.705311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.613 [2024-07-12 01:56:39.705319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.613 qpair failed and we were unable to recover it. 00:38:13.613 [2024-07-12 01:56:39.705666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.705672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.705851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.705857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.706239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.706247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.706413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.706419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.706600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.706606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.706927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.706934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.707268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.707274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.707489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.707496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.707678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.707684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 
00:38:13.614 [2024-07-12 01:56:39.707878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.707885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.708159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.708165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.708530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.708536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.708741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.708747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.709090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.709096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.709549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.709555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.709865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.709872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.710059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.710066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.710364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.710371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.710567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.710574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 
00:38:13.614 [2024-07-12 01:56:39.710879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.710885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.711200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.711207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.711563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.711570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.711920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.711926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.712108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.712114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.712473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.712480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.712664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.712671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.712982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.712988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.713306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.713313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.713654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.713661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 
00:38:13.614 [2024-07-12 01:56:39.713975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.713982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.714190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.714197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.714535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.714542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.714701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.714708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.714973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.714981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.715370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.715377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.715549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.715555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.715800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.715807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.716147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.716153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 00:38:13.614 [2024-07-12 01:56:39.716456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.614 [2024-07-12 01:56:39.716463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.614 qpair failed and we were unable to recover it. 
00:38:13.614 [2024-07-12 01:56:39.716801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.716807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.717123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.717130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.717305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.717313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.717693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.717702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.718021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.718027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.718370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.718376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.718581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.718589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.718767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.718774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.718949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.718956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.719364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.719371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 
00:38:13.615 [2024-07-12 01:56:39.719586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.719593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.719890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.719897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.720210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.720218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.720550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.720557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.721011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.721018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.721350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.721357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.721656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.721662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.721844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.721851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.722064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.722070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.722248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.722255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 
00:38:13.615 [2024-07-12 01:56:39.722658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.722665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.722978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.722984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.723180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.723187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.723523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.723530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.723763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.723770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.724140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.724147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.724557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.724564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.724877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.724883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.725141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.725147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.725446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.725453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 
00:38:13.615 [2024-07-12 01:56:39.725784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.725792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.725987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.725994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.726291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.726297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.726712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.726719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.727053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.727059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.727249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.727257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.727562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.727569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.727804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.727811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.728155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.615 [2024-07-12 01:56:39.728161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.615 qpair failed and we were unable to recover it. 00:38:13.615 [2024-07-12 01:56:39.728500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.616 [2024-07-12 01:56:39.728507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.616 qpair failed and we were unable to recover it. 
00:38:13.616 [2024-07-12 01:56:39.728823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.616 [2024-07-12 01:56:39.728829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.616 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f1df8000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously from 01:56:39.728 through 01:56:39.790 ...]
00:38:13.621 [2024-07-12 01:56:39.790182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.621 [2024-07-12 01:56:39.790188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.621 qpair failed and we were unable to recover it.
00:38:13.621 [2024-07-12 01:56:39.790353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.790361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.790756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.790763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.791078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.791084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.791310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.791326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.791523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.791530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.791823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.791830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.792171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.792178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.792509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.792516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.792877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.792884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.793248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.793255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 
00:38:13.621 [2024-07-12 01:56:39.793554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.793561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.793884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.793891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.794272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.794279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.794601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.794608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.794791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.794798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.795109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.795115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.795405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.795412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.795743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.795750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.621 qpair failed and we were unable to recover it. 00:38:13.621 [2024-07-12 01:56:39.795796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.621 [2024-07-12 01:56:39.795803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.796135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.796142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 
00:38:13.622 [2024-07-12 01:56:39.796476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.796486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.796675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.796682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.797045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.797051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.797372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.797379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.797711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.797718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.797918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.797925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.798270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.798277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.798620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.798627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.798814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.798822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.799115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.799122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 
00:38:13.622 [2024-07-12 01:56:39.799458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.799465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.799655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.799662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.799966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.799973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.800309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.800316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.800640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.800647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.800967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.800973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.801217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.801224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.801571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.801578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.801894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.801900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.802225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.802235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 
00:38:13.622 [2024-07-12 01:56:39.802557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.802564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.802919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.802927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.803247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.803255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.803597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.803604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.803929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.803935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.804299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.804306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.804634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.804640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.804828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.804836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.805179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.805186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.805431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.805438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 
00:38:13.622 [2024-07-12 01:56:39.805749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.805756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.806087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.806094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.806287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.622 [2024-07-12 01:56:39.806294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.622 qpair failed and we were unable to recover it. 00:38:13.622 [2024-07-12 01:56:39.806564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.806571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.806889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.806895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.807094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.807101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.807443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.807450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.807766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.807773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.808110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.808118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.808350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.808357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 
00:38:13.623 [2024-07-12 01:56:39.808745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.808754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.809104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.809112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.809295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.809301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.809616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.809623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.809935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.809941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.810185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.810191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.810525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.810532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.810734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.810740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.811104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.811111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.811484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.811491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 
00:38:13.623 [2024-07-12 01:56:39.811905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.811912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.812233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.812240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.812596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.812603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.812810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.812818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.813163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.813170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.813219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.813225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.813628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.813635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.813939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.813947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.814273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.814280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.814591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.814598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 
00:38:13.623 [2024-07-12 01:56:39.814823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.814830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.814971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.814978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.815200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.815207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.815546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.815553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.815738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.815745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.816083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.816089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.816287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.816294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.816606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.816613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.816952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.816958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.817322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.817329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 
00:38:13.623 [2024-07-12 01:56:39.817668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.817674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.817879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.817887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.818266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.818273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.818598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.623 [2024-07-12 01:56:39.818606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.623 qpair failed and we were unable to recover it. 00:38:13.623 [2024-07-12 01:56:39.818927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.818935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.819137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.819145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.819466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.819473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.819792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.819798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.820122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.820128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.820325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.820332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 
00:38:13.624 [2024-07-12 01:56:39.820689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.820698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.821021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.821028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.821241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.821248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.821574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.821581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.821914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.821921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.822247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.822253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.822296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.822302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.822632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.822639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.822967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.822975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.823334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.823342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 
00:38:13.624 [2024-07-12 01:56:39.823684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.823692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.823890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.823897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.824259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.824265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.824463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.824470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.824770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.824776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.825120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.825126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.825449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.825456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.825786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.825792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.826038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.826045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.826276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.826282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 
00:38:13.624 [2024-07-12 01:56:39.826673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.826680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.827021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.827028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.827394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.827400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.827571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.827579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.827883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.827890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.828282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.828289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.828569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.828575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.828909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.828916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.829233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.829240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.829540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.829547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 
00:38:13.624 [2024-07-12 01:56:39.829872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.829879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.830278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.830286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.830602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.830609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.830806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.830812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.624 [2024-07-12 01:56:39.831026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.624 [2024-07-12 01:56:39.831032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.624 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.831266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.831274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.831529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.831536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.831965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.831972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.832295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.832302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.832705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.832712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 
00:38:13.625 [2024-07-12 01:56:39.832902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.832910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.833151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.833157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.833334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.833341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.833530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.833536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.833918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.833925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.834242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.834248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.834642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.834650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.834955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.834961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.835368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.835375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.835705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.835712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 
00:38:13.625 [2024-07-12 01:56:39.836024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.836031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.836228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.836238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.836429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.836436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.836750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.836758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.836956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.836963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.837154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.837162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.837467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.837474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.837808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.837814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.838047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.838054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.838390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.838397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 
00:38:13.625 [2024-07-12 01:56:39.838598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.838605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.838975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.838982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.839168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.839175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.839467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.839475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.839676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.839683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.839963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.839970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.840304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.840311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.840618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.840625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.840949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.840957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.841217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.841224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 
00:38:13.625 [2024-07-12 01:56:39.841427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.841434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.841646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.841652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.841953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.841960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.842309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.842315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.842515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.842522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.625 [2024-07-12 01:56:39.842924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.625 [2024-07-12 01:56:39.842930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.625 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.843246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.843253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.843586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.843593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.843919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.843925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.844264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.844270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 
00:38:13.626 [2024-07-12 01:56:39.844605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.844612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.844948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.844955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.845154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.845161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.845363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.845371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.845746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.845754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.845964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.845972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.846342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.846350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.846689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.846695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.847009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.847015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.847328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.847335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 
00:38:13.626 [2024-07-12 01:56:39.847665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.847672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.847748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.847754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.847945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.847952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.848118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.848124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.848579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.848586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.848921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.848928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.849261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.849268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.849603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.849609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.849800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.849807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.850142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.850148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 
00:38:13.626 [2024-07-12 01:56:39.850371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.850377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.850706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.850713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.850918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.850925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.851297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.851304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.851625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.851631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.851957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.851963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.852262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.852269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.852486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.852494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.852848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.852854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.626 [2024-07-12 01:56:39.852907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.852913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 
00:38:13.626 [2024-07-12 01:56:39.853272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.626 [2024-07-12 01:56:39.853280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.626 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.853614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.853622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.853981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.853988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.854326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.854333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.854553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.854560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.854817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.854823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.855140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.855148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.855350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.855358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.855717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.855723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.855903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.855911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 
00:38:13.627 [2024-07-12 01:56:39.856222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.856234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.856568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.856575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.856901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.856908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.857099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.857105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.857493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.857500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.857707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.857714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.858040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.858047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.858220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.858226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.858539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.858545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.858770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.858778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 
00:38:13.627 [2024-07-12 01:56:39.859111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.859118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.859461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.859470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.859799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.859806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.860001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.860008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.860259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.860265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.860663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.860670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.860986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.860993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.861180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.861186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.861444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.861451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.861783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.861789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 
00:38:13.627 [2024-07-12 01:56:39.862154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.862161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.862447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.862454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.862625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.862632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.862989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.862997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.863366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.863373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.863592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.627 [2024-07-12 01:56:39.863600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.627 qpair failed and we were unable to recover it. 00:38:13.627 [2024-07-12 01:56:39.863973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.863980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.864304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.864311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.864642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.864648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.864991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.864998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 
00:38:13.628 [2024-07-12 01:56:39.865318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.865325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.865640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.865647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.866001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.866007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.866373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.866380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.866724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.866731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.866931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.866938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.867272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.867279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.867620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.867627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.867954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.867961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.868008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.868015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 
00:38:13.628 [2024-07-12 01:56:39.868350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.868360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.868780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.868786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.868983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.868989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.869325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.869332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.869661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.869668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.869824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.869831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.870242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.870248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.870598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.870606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.870970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.870977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.871259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.871266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 
00:38:13.628 [2024-07-12 01:56:39.871601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.871608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.871981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.871987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.872235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.872241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.872623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.872629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.872951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.872958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.873196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.873204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.873375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.873382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.873705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.873711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.873912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.873920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.874252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.874260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 
00:38:13.628 [2024-07-12 01:56:39.874583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.874589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.628 [2024-07-12 01:56:39.874783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.628 [2024-07-12 01:56:39.874789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.628 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.875132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.875138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.875290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.875297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.875524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.875530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.875822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.875828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.876026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.876033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.876421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.876428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.876713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.876720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.877068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.877074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 
00:38:13.629 [2024-07-12 01:56:39.877388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.877396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.877758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.877765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.877953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.877960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.878267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.878274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.878622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.878628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.878913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.878920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.879264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.879271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.879574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.879581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.879982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.879989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.880318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.880324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 
00:38:13.629 [2024-07-12 01:56:39.880545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.880553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.880905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.880911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.881122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.881129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.881350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.881357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.881669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.881677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.882003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.882009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.882202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.882209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.882541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.882549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.882903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.882910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.883116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.883124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 
00:38:13.629 [2024-07-12 01:56:39.883441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.883448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.883782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.883789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.884187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.884195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.884385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.884393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.884576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.884583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.884905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.884911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.885258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.885265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.885608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.885614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.885968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.885975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 00:38:13.629 [2024-07-12 01:56:39.886332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.886339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.629 qpair failed and we were unable to recover it. 
00:38:13.629 [2024-07-12 01:56:39.886540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.629 [2024-07-12 01:56:39.886547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.886900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.886907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.887263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.887270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.887599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.887606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.887761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.887767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.887948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.887955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.888157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.888163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.888356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.888364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.888665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.888672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.888996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.889003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 
00:38:13.630 [2024-07-12 01:56:39.889324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.889331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.889667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.889673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.890004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.890011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.890343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.890350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.890505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.890512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.890916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.890923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.891111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.891118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.891320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.891328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.891603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.891610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.891959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.891965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 
00:38:13.630 [2024-07-12 01:56:39.892298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.892305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.892529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.892536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.892911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.892919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.893286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.893293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.893630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.893637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.893904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.893911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.894243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.894250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.894566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.894574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.894902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.894909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.895226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.895237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 
00:38:13.630 [2024-07-12 01:56:39.895597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.895603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.895843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.895850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.896021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.896028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.896369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.896376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.896551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.896557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.896789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.630 [2024-07-12 01:56:39.896796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.630 qpair failed and we were unable to recover it. 00:38:13.630 [2024-07-12 01:56:39.897115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.897121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.897513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.897520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.897881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.897888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.898112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.898120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 
00:38:13.631 [2024-07-12 01:56:39.898173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.898180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.898379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.898386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.898731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.898738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.898919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.898926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.899089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.899095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.899413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.899420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.899754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.899761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.900077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.900085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.900498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.900504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.900840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.900846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 
00:38:13.631 [2024-07-12 01:56:39.901034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.901041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.901249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.901255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.901467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.901474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.901784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.901791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.902129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.902137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.902485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.902492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.902910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.902917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.903098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.903105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.903285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.903293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.903465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.903471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 
00:38:13.631 [2024-07-12 01:56:39.903845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.903853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.904040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.904047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.904442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.904449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.904650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.904656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.904993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.904999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.905320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.905327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.631 [2024-07-12 01:56:39.905627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.631 [2024-07-12 01:56:39.905633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.631 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.905961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.905967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.906288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.906296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.906611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.906618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 
00:38:13.632 [2024-07-12 01:56:39.906813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.906819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.907014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.907021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.907235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.907242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.907407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.907414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.907801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.907808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.908172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.908179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.908511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.908518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.908831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.908838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.909159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.909165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.909537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.909544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 
00:38:13.632 [2024-07-12 01:56:39.909870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.909876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.910186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.910192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.910529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.910537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.910859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.910866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.910911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.910918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.911245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.911253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.911574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.911580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.911913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.911922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.912322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.912329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.912665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.912672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 
00:38:13.632 [2024-07-12 01:56:39.912884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.912891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.913074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.913081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.913449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.913456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.913776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.913782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.913987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.913994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.914335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.914342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.914704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.914710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.914906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.914912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.915227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.915244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.915401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.915408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 
00:38:13.632 [2024-07-12 01:56:39.915752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.915759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.916077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.632 [2024-07-12 01:56:39.916083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.632 qpair failed and we were unable to recover it. 00:38:13.632 [2024-07-12 01:56:39.916403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.916410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.916732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.916738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.917062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.917070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.917264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.917272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.917580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.917587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.917774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.917782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.918096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.918102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.918339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.918346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 
00:38:13.633 [2024-07-12 01:56:39.918687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.918693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.918884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.918890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.919297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.919304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.919524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.919530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.919814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.919822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.920184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.920190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.920395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.920402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.920601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.920608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.920922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.920929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.921248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.921256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 
00:38:13.633 [2024-07-12 01:56:39.921560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.921566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.921975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.921983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.922296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.922303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.922739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.922748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.923067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.923075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.923407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.923414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.923746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.923754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.924079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.924088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.924430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.924437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.924627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.924634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 
00:38:13.633 [2024-07-12 01:56:39.924838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.924846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.925100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.925107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.925399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.925407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.925769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.925778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.926217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.926225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.926581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.926589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.926878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.926886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.927071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.927078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.927526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.927534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.633 qpair failed and we were unable to recover it. 00:38:13.633 [2024-07-12 01:56:39.927780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.633 [2024-07-12 01:56:39.927786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 
00:38:13.634 [2024-07-12 01:56:39.928137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.928145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.928337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.928344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.928715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.928722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.929070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.929076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.929426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.929433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.929619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.929626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.930043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.930051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.930258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.930266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.930612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.930618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 00:38:13.634 [2024-07-12 01:56:39.930975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-12 01:56:39.930982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.634 qpair failed and we were unable to recover it. 
00:38:13.634 [2024-07-12 01:56:39.931371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.634 [2024-07-12 01:56:39.931378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.634 qpair failed and we were unable to recover it.
00:38:13.634 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for each successive reconnect attempt, with record timestamps running from 01:56:39.931 through 01:56:39.993 and the console clock advancing from 00:38:13.634 to 00:38:13.913 ...]
00:38:13.913 [2024-07-12 01:56:39.993883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.993891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.994219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.994227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.994548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.994557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.994894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.994902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.995248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.995256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.995450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.995458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.995617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.995625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.995979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.995989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.996366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.913 [2024-07-12 01:56:39.996374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.913 qpair failed and we were unable to recover it. 00:38:13.913 [2024-07-12 01:56:39.996735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.996743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 
00:38:13.914 [2024-07-12 01:56:39.996928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.996936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.997235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.997244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.997587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.997595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.997777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.997785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.998111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.998119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.998449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.998457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.998791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.998798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.999151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.999160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.999499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.999508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:39.999701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.999709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 
00:38:13.914 [2024-07-12 01:56:39.999874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:39.999881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.000190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.000199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.000397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.000406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.000748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.000756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.001093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.001101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.001147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.001154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.001929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.001945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.002149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.002158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.002366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.002376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.002670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.002679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 
00:38:13.914 [2024-07-12 01:56:40.003019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.003028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.003105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.003114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.003461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.003470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.003818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.003826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.914 [2024-07-12 01:56:40.004073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.914 [2024-07-12 01:56:40.004081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.914 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.004358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.004367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.004707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.004716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.004919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.004928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.005157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.005166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.005412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.005421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 
00:38:13.915 [2024-07-12 01:56:40.005774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.005782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.006130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.006138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.006492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.006501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.006760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.006768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.007149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.007159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.007588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.007597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.007785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.007794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.007982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.007993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.008171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.008179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.008494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.008502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 
00:38:13.915 [2024-07-12 01:56:40.008835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.008843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.009197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.009206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.009618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.009626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.009958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.009967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.010158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.010167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.010388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.010396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.010581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.010591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.010756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.010765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.011142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.011151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.011385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.011393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 
00:38:13.915 [2024-07-12 01:56:40.011722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.011731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.012059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.012068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.012254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.012263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.012528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.012537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.012886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.012895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.013106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.013115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.013402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.013411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.013626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.013635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.013998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.014007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.014203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.014212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 
00:38:13.915 [2024-07-12 01:56:40.014539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.014548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.014784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.014793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.015152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.015161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.015506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.915 [2024-07-12 01:56:40.015515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.915 qpair failed and we were unable to recover it. 00:38:13.915 [2024-07-12 01:56:40.015701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.015710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.015893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.015901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.016061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.016071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.016406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.016415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.016741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.016750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.017119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.017127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 
00:38:13.916 [2024-07-12 01:56:40.017460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.017470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.017657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.017666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.017969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.017978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.018320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.018330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.018697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.018706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.019043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.019053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.019250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.019260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.019562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.019572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.019931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.019941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.020273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.020283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 
00:38:13.916 [2024-07-12 01:56:40.020505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.020514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.020859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.020867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.021222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.021245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.021567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.021576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.021767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.021776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.021957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.021966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.022326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.022335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.022661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.022670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.022904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.022913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.023148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.023158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 
00:38:13.916 [2024-07-12 01:56:40.023421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.023430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.023767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.023776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.024013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.024021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.024237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.024246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.024445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.024454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.024639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.024649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.024911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.024920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.025122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.025131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.025383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.025396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.025768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.025776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 
00:38:13.916 [2024-07-12 01:56:40.025914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.025921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.026180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.026188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.026363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.026372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.916 qpair failed and we were unable to recover it. 00:38:13.916 [2024-07-12 01:56:40.026572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.916 [2024-07-12 01:56:40.026580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.026722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.026730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.026926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.026934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.027121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.027130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.027470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.027478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.027872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.027880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.028108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.028117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 
00:38:13.917 [2024-07-12 01:56:40.028368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.028376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.028727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.028735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.028905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.028914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.029140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.029149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.029337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.029345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.029567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.029576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.029879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.029887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.029930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.029938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.030326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.030334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 00:38:13.917 [2024-07-12 01:56:40.030683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.917 [2024-07-12 01:56:40.030691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.917 qpair failed and we were unable to recover it. 
00:38:13.917 [2024-07-12 01:56:40.031023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.917 [2024-07-12 01:56:40.031032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.917 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every subsequent reconnect attempt between 01:56:40.031 and 01:56:40.092: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f1df8000b90 at addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered ...]
00:38:13.922 [2024-07-12 01:56:40.092300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.922 [2024-07-12 01:56:40.092308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.922 qpair failed and we were unable to recover it.
00:38:13.922 [2024-07-12 01:56:40.092637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.922 [2024-07-12 01:56:40.092645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.922 qpair failed and we were unable to recover it. 00:38:13.922 [2024-07-12 01:56:40.092989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.092997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.093415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.093423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.093643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.093651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.093908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.093917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.094287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.094295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:13.923 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:38:13.923 [2024-07-12 01:56:40.094721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.094730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.094989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.094998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 
00:38:13.923 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:13.923 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:13.923 [2024-07-12 01:56:40.095339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.095348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.095428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.095434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.923 [2024-07-12 01:56:40.095611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.095620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.095819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.095828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.096005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.096015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.096343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.096351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.096423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.096429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.096661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.096669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.096870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.096877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 
00:38:13.923 [2024-07-12 01:56:40.097192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.097200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.097438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.097447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.097707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.097715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.097866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.097873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.098080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.098089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.098236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.098244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.098398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.098406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.098708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.098716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.099048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.099056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.099447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.099455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 
00:38:13.923 [2024-07-12 01:56:40.099796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.099804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.100148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.100157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.100562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.100571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.100899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.100909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.101236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.101243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.101413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.923 [2024-07-12 01:56:40.101421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.923 qpair failed and we were unable to recover it. 00:38:13.923 [2024-07-12 01:56:40.101793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.101800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.102007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.102014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.102315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.102322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.102705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.102714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 
00:38:13.924 [2024-07-12 01:56:40.102903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.102910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.103236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.103243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.103567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.103575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.103788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.103795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.104095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.104103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.104503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.104513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.104901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.104909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.105149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.105156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.105366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.105373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.105586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.105594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 
00:38:13.924 [2024-07-12 01:56:40.105950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.105957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.106158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.106166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.106243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.106251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.106568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.106575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.106657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.106664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.107031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.107040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.107364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.107372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.107568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.107576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.107904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.107912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.108099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.108107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 
00:38:13.924 [2024-07-12 01:56:40.108455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.108464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.108716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.108724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.109055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.109063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.109421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.109429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.109626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.109633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.109834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.109841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.109998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.110006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.110206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.110213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.110547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.110555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.110834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.110841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 
00:38:13.924 [2024-07-12 01:56:40.111175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.111185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.111525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.111533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.111731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.111740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.112093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.112100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.112177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.112184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.924 qpair failed and we were unable to recover it. 00:38:13.924 [2024-07-12 01:56:40.112532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.924 [2024-07-12 01:56:40.112540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.112742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.112750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.113098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.113105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.113344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.113352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.113650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.113658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 
00:38:13.925 [2024-07-12 01:56:40.113847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.113855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.114160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.114167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.114455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.114464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.114661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.114669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.115011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.115019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.115082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.115089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.115292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.115300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.115485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.115493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.115813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.115820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.116026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.116035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 
00:38:13.925 [2024-07-12 01:56:40.116213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.116221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.116438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.116446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.116790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.116798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.117005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.117012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.117073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.117081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.117399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.117408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.117772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.117779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.117987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.117996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.118326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.118334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.118653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.118661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 
00:38:13.925 [2024-07-12 01:56:40.118896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.118904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.119106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.119114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.119357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.119365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.119775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.119782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.119979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.119987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.120183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.120194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.120523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.120532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.120866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.120875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.121248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.121256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.121473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.121480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 
00:38:13.925 [2024-07-12 01:56:40.121773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.121781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.121988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.121996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.122238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.122246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.122585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.122593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.122781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.925 [2024-07-12 01:56:40.122789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.925 qpair failed and we were unable to recover it. 00:38:13.925 [2024-07-12 01:56:40.123150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.123158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.123494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.123502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.123565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.123574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.123871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.123878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.124207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.124215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 
00:38:13.926 [2024-07-12 01:56:40.124444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.124452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.124782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.124789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.125151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.125159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.125497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.125505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.125712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.125720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.125918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.125926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.126273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.126280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.126590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.126598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.126994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.127004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.127334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.127340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 
00:38:13.926 [2024-07-12 01:56:40.127656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.127663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.127854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.127861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.128189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.128196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.128525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.128532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.128722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.128729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.128961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.128970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.129175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.129183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.129489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.129496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.129695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.129702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.129926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.129934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 
00:38:13.926 [2024-07-12 01:56:40.130242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.130249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.130581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.130588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.130946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.130953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.131172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.131178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.131426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.131433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.131569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.131575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.131913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.131919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.132153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.132160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.132624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.132633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 00:38:13.926 [2024-07-12 01:56:40.132973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.926 [2024-07-12 01:56:40.132980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.926 qpair failed and we were unable to recover it. 
00:38:13.926 [2024-07-12 01:56:40.133170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.926 [2024-07-12 01:56:40.133178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.926 qpair failed and we were unable to recover it.
00:38:13.927 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:13.927 [2024-07-12 01:56:40.135008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:13.927 [2024-07-12 01:56:40.135016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420
00:38:13.927 qpair failed and we were unable to recover it.
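For readers skimming the failure records: errno = 111 on Linux is ECONNREFUSED, meaning each connect() from the initiator to 10.0.0.2:4420 is being refused, consistent with no listener being up on that address and port at this point in the run. A minimal, stand-alone way to confirm the errno mapping (not part of the test suite):

    python3 -c 'import errno, os; print(errno.errorcode[111], "->", os.strerror(111))'
    # prints: ECONNREFUSED -> Connection refused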
00:38:13.927 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:13.927 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:13.927 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
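The trace above is the first target-setup step: creating a 64 MB malloc bdev with a 512-byte block size, named Malloc0. A minimal sketch of the equivalent direct invocation, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as it does in these autotest scripts:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # on success the RPC returns the bdev name (Malloc0), which shows up in the log below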
00:38:13.928 Malloc0
00:38:13.928 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:13.929 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:13.929 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:13.929 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:13.929 [2024-07-12 01:56:40.160722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
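The *** TCP Transport Init *** notice confirms that the nvmf_create_transport RPC traced above took effect on the target side. A sketch of the same step issued directly (same scripts/rpc.py assumption; the -o flag is reproduced verbatim from the traced command rather than interpreted here):

    scripts/rpc.py nvmf_create_transport -t tcp -o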
00:38:13.930 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:13.930 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:13.930 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:13.930 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
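The subsystem-creation step traced above registers NQN nqn.2016-06.io.spdk:cnode1 with serial number SPDK00000000000001 and -a to allow any host to connect. A sketch of the equivalent direct call, under the same scripts/rpc.py assumption:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001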
00:38:13.931 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.931 [2024-07-12 01:56:40.182083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.182092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:13.931 [2024-07-12 01:56:40.182431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.182440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.931 [2024-07-12 01:56:40.182640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.182648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.931 [2024-07-12 01:56:40.182984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.182992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.183385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.183393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.183589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.183597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.183993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.184002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.184195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.184203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 
00:38:13.931 [2024-07-12 01:56:40.184541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.184549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.184907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.184916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.185337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.185346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.185535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.185545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.185920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.185928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.186295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.186303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.186659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.186666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.186968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.186976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.187146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.187154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.187539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.187547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 
00:38:13.931 [2024-07-12 01:56:40.187745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.187753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.188017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.931 [2024-07-12 01:56:40.188025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.931 qpair failed and we were unable to recover it. 00:38:13.931 [2024-07-12 01:56:40.188353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.188361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.188699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.188707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.189044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.189052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.189248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.189256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.189459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.189467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.189784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.189792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.189991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.189998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.190330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.190338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 
00:38:13.932 [2024-07-12 01:56:40.190684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.190693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.191054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.191063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.191408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.191416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.191621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.191630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.191883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.191891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.192092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.192101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.192470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.192478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.192811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.192819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.193208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.193217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.193558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.193567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 
00:38:13.932 [2024-07-12 01:56:40.193751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.193759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.932 [2024-07-12 01:56:40.193948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.193957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:13.932 [2024-07-12 01:56:40.194278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.194287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.932 [2024-07-12 01:56:40.194616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.194625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.932 [2024-07-12 01:56:40.194988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.194996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.195329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.195337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.195496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.195504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.195700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.195708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 
00:38:13.932 [2024-07-12 01:56:40.195956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.195965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.196297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.196306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.196487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.196496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.196800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.196809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.197041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.197049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.197380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.197388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.197744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.932 [2024-07-12 01:56:40.197751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.932 qpair failed and we were unable to recover it. 00:38:13.932 [2024-07-12 01:56:40.198106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.198114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.198319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.198328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.198507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.198515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 
00:38:13.933 [2024-07-12 01:56:40.198865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.198873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.199238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.199247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.199582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.199590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.199781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.199789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.199964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.199971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.200272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.200280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.200619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.200627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.200957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.933 [2024-07-12 01:56:40.200965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1df8000b90 with addr=10.0.0.2, port=4420 00:38:13.933 qpair failed and we were unable to recover it. 
00:38:13.933 [2024-07-12 01:56:40.200992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.933 [2024-07-12 01:56:40.211564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:13.933 [2024-07-12 01:56:40.211638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:13.933 [2024-07-12 01:56:40.211654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:13.933 [2024-07-12 01:56:40.211660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.933 [2024-07-12 01:56:40.211666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:13.933 [2024-07-12 01:56:40.211682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.933 01:56:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 74941 00:38:13.933 [2024-07-12 01:56:40.221503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:13.933 [2024-07-12 01:56:40.221561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:13.933 [2024-07-12 01:56:40.221574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:13.933 [2024-07-12 01:56:40.221582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.933 [2024-07-12 01:56:40.221586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:13.933 [2024-07-12 01:56:40.221598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.933 qpair failed and we were unable to recover it. 
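For reference, the target-side setup that the rpc_cmd trace above drives can be issued directly with SPDK's scripts/rpc.py helper; this is only a sketch built from the arguments visible in the log (the transport creation, the nvmf_create_subsystem call and the Malloc0 bdev setup happen earlier in the test and are assumed here):
  # attach the Malloc0 bdev as a namespace of cnode1 (arguments taken from the trace above)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # expose the subsystem and the discovery service on the TCP listener the host keeps polling
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Once the listener is registered the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above; the host-side connect() attempts then stop failing with errno 111 (connection refused) and instead fail at the fabrics CONNECT stage (sct 1, sc 130), which the surrounding nvmf_target_disconnect test cases report as qpairs that failed and could not be recovered.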
00:38:13.933 [2024-07-12 01:56:40.231540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:13.933 [2024-07-12 01:56:40.231601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:13.933 [2024-07-12 01:56:40.231614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:13.933 [2024-07-12 01:56:40.231619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.933 [2024-07-12 01:56:40.231624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:13.933 [2024-07-12 01:56:40.231635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.241514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:13.933 [2024-07-12 01:56:40.241576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:13.933 [2024-07-12 01:56:40.241589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:13.933 [2024-07-12 01:56:40.241594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.933 [2024-07-12 01:56:40.241598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:13.933 [2024-07-12 01:56:40.241609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.933 qpair failed and we were unable to recover it. 00:38:13.933 [2024-07-12 01:56:40.251527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:13.933 [2024-07-12 01:56:40.251588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:13.933 [2024-07-12 01:56:40.251601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:13.933 [2024-07-12 01:56:40.251606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.933 [2024-07-12 01:56:40.251610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:13.933 [2024-07-12 01:56:40.251621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.933 qpair failed and we were unable to recover it. 
00:38:14.197 [2024-07-12 01:56:40.261486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.261564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.261576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.261582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.261586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.261598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 00:38:14.197 [2024-07-12 01:56:40.271564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.271667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.271680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.271685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.271690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.271701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 00:38:14.197 [2024-07-12 01:56:40.281467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.281523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.281534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.281539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.281544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.281555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 
00:38:14.197 [2024-07-12 01:56:40.291592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.291688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.291701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.291706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.291710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.291722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 00:38:14.197 [2024-07-12 01:56:40.301619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.301670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.301682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.301688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.301693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.301704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 00:38:14.197 [2024-07-12 01:56:40.311677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.197 [2024-07-12 01:56:40.311739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.197 [2024-07-12 01:56:40.311754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.197 [2024-07-12 01:56:40.311759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.197 [2024-07-12 01:56:40.311763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.197 [2024-07-12 01:56:40.311774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.197 qpair failed and we were unable to recover it. 
00:38:14.198 [2024-07-12 01:56:40.321688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.321740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.321752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.321757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.321762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.321772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.331660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.331720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.331731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.331736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.331741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.331751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.341806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.341870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.341881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.341886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.341891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.341901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 
00:38:14.198 [2024-07-12 01:56:40.351781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.351832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.351843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.351848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.351853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.351867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.361792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.361844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.361855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.361860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.361865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.361875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.371841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.371900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.371911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.371917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.371921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.371931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 
00:38:14.198 [2024-07-12 01:56:40.381866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.381923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.381942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.381948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.381953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.381968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.391811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.391916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.391929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.391935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.391939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.391950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.401782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.401913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.401933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.401938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.401943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.401955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 
00:38:14.198 [2024-07-12 01:56:40.411941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.412018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.412037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.412044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.412049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.412063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.421977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.422034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.422053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.422059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.422064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.422078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.432000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.432062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.432081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.432087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.432093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.432107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 
00:38:14.198 [2024-07-12 01:56:40.442016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.442072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.442085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.442090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.442099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.442111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.452101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.198 [2024-07-12 01:56:40.452174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.198 [2024-07-12 01:56:40.452186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.198 [2024-07-12 01:56:40.452191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.198 [2024-07-12 01:56:40.452196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.198 [2024-07-12 01:56:40.452208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.198 qpair failed and we were unable to recover it. 00:38:14.198 [2024-07-12 01:56:40.462184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.462270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.462283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.462288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.462292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.462303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 
00:38:14.199 [2024-07-12 01:56:40.472028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.472084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.472096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.472101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.472105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.472116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.199 [2024-07-12 01:56:40.482187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.482244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.482256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.482261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.482265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.482276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.199 [2024-07-12 01:56:40.492222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.492296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.492308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.492314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.492318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.492329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 
00:38:14.199 [2024-07-12 01:56:40.502238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.502310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.502322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.502327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.502332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.502342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.199 [2024-07-12 01:56:40.512203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.512256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.512267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.512273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.512277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.512288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.199 [2024-07-12 01:56:40.522227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.522318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.522329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.522334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.522339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.522349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 
00:38:14.199 [2024-07-12 01:56:40.532264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.532323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.532334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.532339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.532346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.532356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.199 [2024-07-12 01:56:40.542287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.199 [2024-07-12 01:56:40.542388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.199 [2024-07-12 01:56:40.542400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.199 [2024-07-12 01:56:40.542405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.199 [2024-07-12 01:56:40.542409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.199 [2024-07-12 01:56:40.542419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.199 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.552359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.552424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.552435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.552440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.552444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.552455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 
00:38:14.463 [2024-07-12 01:56:40.562244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.562348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.562360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.562365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.562370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.562380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.572371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.572428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.572439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.572444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.572449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.572459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.582385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.582441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.582452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.582457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.582462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.582472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 
00:38:14.463 [2024-07-12 01:56:40.592309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.592364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.592376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.592381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.592385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.592395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.602470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.602526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.602537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.602542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.602547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.602557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.612491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.612555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.612566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.612571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.612576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.612586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 
00:38:14.463 [2024-07-12 01:56:40.622546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.622610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.622621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.622629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.622633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.622644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.632527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.632579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.632590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.632595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.632600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.632610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.642561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.642616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.642627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.642632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.642636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.642646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 
00:38:14.463 [2024-07-12 01:56:40.652480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.652539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.652553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.652558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.652562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.652573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.662621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.463 [2024-07-12 01:56:40.662675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.463 [2024-07-12 01:56:40.662686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.463 [2024-07-12 01:56:40.662691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.463 [2024-07-12 01:56:40.662695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.463 [2024-07-12 01:56:40.662705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.463 qpair failed and we were unable to recover it. 00:38:14.463 [2024-07-12 01:56:40.672665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.672717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.672728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.672733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.672737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.672747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 
00:38:14.464 [2024-07-12 01:56:40.682703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.682756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.682767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.682772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.682776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.682786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.692598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.692659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.692670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.692675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.692680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.692690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.702756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.702806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.702818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.702823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.702827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.702837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 
00:38:14.464 [2024-07-12 01:56:40.712662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.712726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.712740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.712745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.712750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.712759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.722793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.722853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.722864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.722869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.722874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.722884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.732815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.732876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.732887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.732892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.732897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.732906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 
00:38:14.464 [2024-07-12 01:56:40.742868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.742919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.742930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.742935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.742939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.742949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.752812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.752878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.752890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.752895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.752899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.752912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.762942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.763012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.763023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.763028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.763033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.763043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 
00:38:14.464 [2024-07-12 01:56:40.772950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.773011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.773022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.773027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.773031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.773041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.782845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.782893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.782905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.782909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.782914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.782924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.792998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.793054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.793065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.793070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.793075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.793085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 
00:38:14.464 [2024-07-12 01:56:40.803026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.803086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.803099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.464 [2024-07-12 01:56:40.803104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.464 [2024-07-12 01:56:40.803109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.464 [2024-07-12 01:56:40.803119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.464 qpair failed and we were unable to recover it. 00:38:14.464 [2024-07-12 01:56:40.813055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.464 [2024-07-12 01:56:40.813132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.464 [2024-07-12 01:56:40.813143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.465 [2024-07-12 01:56:40.813148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.465 [2024-07-12 01:56:40.813153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.465 [2024-07-12 01:56:40.813163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.465 qpair failed and we were unable to recover it. 00:38:14.727 [2024-07-12 01:56:40.823070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.823123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.823134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.823139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.823144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.823154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 
00:38:14.728 [2024-07-12 01:56:40.832987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.833040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.833051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.833056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.833061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.833071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.843148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.843201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.843212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.843217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.843221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.843244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.853158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.853214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.853225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.853233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.853238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.853248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 
00:38:14.728 [2024-07-12 01:56:40.863188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.863250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.863261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.863266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.863271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.863281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.873195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.873251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.873262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.873267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.873271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.873281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.883263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.883318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.883328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.883333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.883338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.883348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 
00:38:14.728 [2024-07-12 01:56:40.893157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.893218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.893233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.893238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.893242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.893252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.903294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.903343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.903354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.903359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.903364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.903374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.913326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.913379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.913391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.913396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.913400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.913410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 
00:38:14.728 [2024-07-12 01:56:40.923380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.923432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.923443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.923448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.923453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.923462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.933382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.933443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.933454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.933459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.933466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.933476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 00:38:14.728 [2024-07-12 01:56:40.943421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.943475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.943486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.943491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.943495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.943505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.728 qpair failed and we were unable to recover it. 
00:38:14.728 [2024-07-12 01:56:40.953447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.728 [2024-07-12 01:56:40.953500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.728 [2024-07-12 01:56:40.953511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.728 [2024-07-12 01:56:40.953516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.728 [2024-07-12 01:56:40.953520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.728 [2024-07-12 01:56:40.953531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:40.963488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:40.963551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:40.963562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:40.963567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:40.963572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:40.963582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:40.973515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:40.973579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:40.973590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:40.973594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:40.973599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:40.973609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 
00:38:14.729 [2024-07-12 01:56:40.983402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:40.983454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:40.983465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:40.983470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:40.983474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:40.983484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:40.993555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:40.993604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:40.993615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:40.993620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:40.993625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:40.993635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:41.003595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.003651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.003662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.003668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.003672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.003682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 
00:38:14.729 [2024-07-12 01:56:41.013600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.013660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.013671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.013676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.013680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.013690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:41.023628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.023676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.023687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.023695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.023699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.023709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:41.033667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.033717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.033729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.033734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.033738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.033748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 
00:38:14.729 [2024-07-12 01:56:41.043715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.043768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.043779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.043784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.043788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.043798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:41.053723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.053781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.053792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.053797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.053802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.053811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.729 [2024-07-12 01:56:41.063745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.063798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.063809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.063814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.063818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.063828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 
00:38:14.729 [2024-07-12 01:56:41.073770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.729 [2024-07-12 01:56:41.073823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.729 [2024-07-12 01:56:41.073833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.729 [2024-07-12 01:56:41.073838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.729 [2024-07-12 01:56:41.073843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.729 [2024-07-12 01:56:41.073853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.729 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.083790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.083844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.083855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.083860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.083864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.083875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.093842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.093900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.093911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.093916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.093920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.093930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 
00:38:14.993 [2024-07-12 01:56:41.103872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.103926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.103937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.103942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.103946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.103956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.113884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.113933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.113949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.113954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.113959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.113970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.123943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.123995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.124006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.124011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.124015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.124025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 
00:38:14.993 [2024-07-12 01:56:41.133820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.133877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.133888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.133893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.133898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.133908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.143988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.144038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.144052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.144057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.144062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.144073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.154003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.154065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.154084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.154090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.154095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.154116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 
00:38:14.993 [2024-07-12 01:56:41.163971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.164043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.164056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.164061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.164065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.164076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.993 [2024-07-12 01:56:41.174075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.993 [2024-07-12 01:56:41.174136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.993 [2024-07-12 01:56:41.174148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.993 [2024-07-12 01:56:41.174153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.993 [2024-07-12 01:56:41.174157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.993 [2024-07-12 01:56:41.174168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.993 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.184068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.184118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.184130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.184135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.184140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.184150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 
00:38:14.994 [2024-07-12 01:56:41.194053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.194110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.194122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.194127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.194131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.194142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.204185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.204247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.204261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.204266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.204270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.204281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.214188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.214249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.214260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.214265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.214269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.214279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 
00:38:14.994 [2024-07-12 01:56:41.224201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.224252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.224263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.224268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.224273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.224283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.234114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.234163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.234174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.234180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.234184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.234195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.244262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.244316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.244327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.244332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.244336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.244350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 
00:38:14.994 [2024-07-12 01:56:41.254281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.254336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.254348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.254353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.254357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.254369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.264303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.264354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.264366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.264371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.264375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.264386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.274272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.274335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.274346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.274351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.274355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.274366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 
00:38:14.994 [2024-07-12 01:56:41.284366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.284422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.284433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.284438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.284442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.284452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.294377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.294431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.294444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.294449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.294454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.294464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.304414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.304474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.304487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.304492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.304496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.304509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 
00:38:14.994 [2024-07-12 01:56:41.314391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.314457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.314468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.994 [2024-07-12 01:56:41.314473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.994 [2024-07-12 01:56:41.314478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.994 [2024-07-12 01:56:41.314488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.994 qpair failed and we were unable to recover it. 00:38:14.994 [2024-07-12 01:56:41.324476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.994 [2024-07-12 01:56:41.324536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.994 [2024-07-12 01:56:41.324547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.995 [2024-07-12 01:56:41.324552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.995 [2024-07-12 01:56:41.324557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.995 [2024-07-12 01:56:41.324567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.995 qpair failed and we were unable to recover it. 00:38:14.995 [2024-07-12 01:56:41.334516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.995 [2024-07-12 01:56:41.334570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.995 [2024-07-12 01:56:41.334581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.995 [2024-07-12 01:56:41.334586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.995 [2024-07-12 01:56:41.334593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.995 [2024-07-12 01:56:41.334603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.995 qpair failed and we were unable to recover it. 
00:38:14.995 [2024-07-12 01:56:41.344549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:14.995 [2024-07-12 01:56:41.344607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:14.995 [2024-07-12 01:56:41.344619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:14.995 [2024-07-12 01:56:41.344623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.995 [2024-07-12 01:56:41.344628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:14.995 [2024-07-12 01:56:41.344638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.995 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.354584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.354637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.354648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.354653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.354658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.354667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.364605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.364666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.364677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.364682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.364687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.364697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 
00:38:15.259 [2024-07-12 01:56:41.374643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.374697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.374709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.374714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.374718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.374729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.384634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.384691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.384702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.384707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.384712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.384722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.394542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.394599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.394610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.394615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.394620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.394630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 
00:38:15.259 [2024-07-12 01:56:41.404586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.404643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.404654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.404659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.404664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.404674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.414729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.414786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.414797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.414802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.414807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.414817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.424751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.424801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.424812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.424820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.424825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.424835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 
00:38:15.259 [2024-07-12 01:56:41.434766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.434817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.434828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.434833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.434837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.434847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.444815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.444867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.444878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.444883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.444887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.444897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.454830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.454884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.454896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.454901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.454905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.454915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 
00:38:15.259 [2024-07-12 01:56:41.464826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.464923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.464942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.464949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.464954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.464969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.474877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.474934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.474953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.474959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.474964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.259 [2024-07-12 01:56:41.474978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.259 qpair failed and we were unable to recover it. 00:38:15.259 [2024-07-12 01:56:41.484877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.259 [2024-07-12 01:56:41.484968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.259 [2024-07-12 01:56:41.484980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.259 [2024-07-12 01:56:41.484986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.259 [2024-07-12 01:56:41.484991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.485001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 
00:38:15.260 [2024-07-12 01:56:41.494944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.494998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.495009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.495014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.495018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.495029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.504964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.505012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.505024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.505029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.505033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.505043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.514971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.515095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.515107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.515115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.515120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.515131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 
00:38:15.260 [2024-07-12 01:56:41.525037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.525090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.525101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.525106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.525111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.525121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.535077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.535135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.535147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.535152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.535157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.535167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.545074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.545128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.545140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.545145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.545149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.545159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 
00:38:15.260 [2024-07-12 01:56:41.555115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.555165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.555176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.555181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.555185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.555196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.565159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.565214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.565225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.565234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.565239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.565250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.575079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.575136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.575147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.575152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.575157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.575167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 
00:38:15.260 [2024-07-12 01:56:41.585121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.585181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.585192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.585197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.585201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.585212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.595212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.595271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.595282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.595287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.595291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.595302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 00:38:15.260 [2024-07-12 01:56:41.605269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.260 [2024-07-12 01:56:41.605321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.260 [2024-07-12 01:56:41.605335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.260 [2024-07-12 01:56:41.605340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.260 [2024-07-12 01:56:41.605345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.260 [2024-07-12 01:56:41.605355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.260 qpair failed and we were unable to recover it. 
00:38:15.524 [2024-07-12 01:56:41.615298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.615351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.524 [2024-07-12 01:56:41.615362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.524 [2024-07-12 01:56:41.615367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.524 [2024-07-12 01:56:41.615372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.524 [2024-07-12 01:56:41.615382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.524 qpair failed and we were unable to recover it. 00:38:15.524 [2024-07-12 01:56:41.625332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.625383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.524 [2024-07-12 01:56:41.625394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.524 [2024-07-12 01:56:41.625399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.524 [2024-07-12 01:56:41.625403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.524 [2024-07-12 01:56:41.625414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.524 qpair failed and we were unable to recover it. 00:38:15.524 [2024-07-12 01:56:41.635275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.635338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.524 [2024-07-12 01:56:41.635349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.524 [2024-07-12 01:56:41.635354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.524 [2024-07-12 01:56:41.635358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.524 [2024-07-12 01:56:41.635369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.524 qpair failed and we were unable to recover it. 
00:38:15.524 [2024-07-12 01:56:41.645403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.645455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.524 [2024-07-12 01:56:41.645466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.524 [2024-07-12 01:56:41.645471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.524 [2024-07-12 01:56:41.645475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.524 [2024-07-12 01:56:41.645489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.524 qpair failed and we were unable to recover it. 00:38:15.524 [2024-07-12 01:56:41.655419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.655517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.524 [2024-07-12 01:56:41.655528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.524 [2024-07-12 01:56:41.655533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.524 [2024-07-12 01:56:41.655537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.524 [2024-07-12 01:56:41.655547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.524 qpair failed and we were unable to recover it. 00:38:15.524 [2024-07-12 01:56:41.665430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.524 [2024-07-12 01:56:41.665515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.665526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.665531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.665535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.665546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 
00:38:15.525 [2024-07-12 01:56:41.675457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.675509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.675521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.675526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.675530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.675541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.685494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.685549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.685561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.685566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.685570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.685581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.695528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.695584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.695598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.695604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.695608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.695618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 
00:38:15.525 [2024-07-12 01:56:41.705551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.705602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.705613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.705618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.705622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.705632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.715442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.715503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.715514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.715519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.715523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.715533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.725602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.725653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.725664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.725669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.725673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.725684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 
00:38:15.525 [2024-07-12 01:56:41.735618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.735676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.735687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.735693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.735700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.735710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.745535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.745620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.745632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.745637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.745642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.745653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.755668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.755762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.755774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.755779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.755784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.755794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 
00:38:15.525 [2024-07-12 01:56:41.765694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.765763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.765774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.765779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.765783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.765794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.775723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.775781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.775792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.775797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.775801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.775812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.785651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.785711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.785723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.785728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.785733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.785743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 
00:38:15.525 [2024-07-12 01:56:41.795810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.795859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.795870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.525 [2024-07-12 01:56:41.795875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.525 [2024-07-12 01:56:41.795880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.525 [2024-07-12 01:56:41.795890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.525 qpair failed and we were unable to recover it. 00:38:15.525 [2024-07-12 01:56:41.805855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.525 [2024-07-12 01:56:41.805910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.525 [2024-07-12 01:56:41.805921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.805926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.805930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.805940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 00:38:15.526 [2024-07-12 01:56:41.815737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.815799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.815810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.815815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.815819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.815829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 
00:38:15.526 [2024-07-12 01:56:41.825878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.825931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.825943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.825953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.825957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.825967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 00:38:15.526 [2024-07-12 01:56:41.835919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.835975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.835993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.835999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.836004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.836018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 00:38:15.526 [2024-07-12 01:56:41.845952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.846008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.846027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.846033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.846038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.846052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 
00:38:15.526 [2024-07-12 01:56:41.855967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.856029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.856048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.856053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.856059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.856073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 00:38:15.526 [2024-07-12 01:56:41.865902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.865952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.865965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.865970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.865974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.865986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 00:38:15.526 [2024-07-12 01:56:41.876024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.526 [2024-07-12 01:56:41.876081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.526 [2024-07-12 01:56:41.876092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.526 [2024-07-12 01:56:41.876098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.526 [2024-07-12 01:56:41.876102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.526 [2024-07-12 01:56:41.876113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.526 qpair failed and we were unable to recover it. 
00:38:15.789 [2024-07-12 01:56:41.885917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.789 [2024-07-12 01:56:41.885974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.789 [2024-07-12 01:56:41.885986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.789 [2024-07-12 01:56:41.885991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.789 [2024-07-12 01:56:41.885996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.789 [2024-07-12 01:56:41.886008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.789 qpair failed and we were unable to recover it. 00:38:15.789 [2024-07-12 01:56:41.896078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.789 [2024-07-12 01:56:41.896136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.789 [2024-07-12 01:56:41.896147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.789 [2024-07-12 01:56:41.896152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.896157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.896168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.906120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.906173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.906184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.906189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.906193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.906204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 
00:38:15.790 [2024-07-12 01:56:41.916133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.916184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.916195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.916203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.916208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.916218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.926164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.926217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.926232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.926237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.926242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.926253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.936191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.936250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.936262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.936267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.936271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.936282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 
00:38:15.790 [2024-07-12 01:56:41.946201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.946257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.946269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.946274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.946278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.946289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.956238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.956286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.956298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.956303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.956307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.956317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.966266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.966318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.966329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.966334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.966338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.966348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 
00:38:15.790 [2024-07-12 01:56:41.976166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.976224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.976240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.976246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.976250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.976265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.986339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.986390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.986402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.986407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.986411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.986422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:41.996358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:41.996411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:41.996422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:41.996428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:41.996432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:41.996442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 
00:38:15.790 [2024-07-12 01:56:42.006390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:42.006445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:42.006459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:42.006464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:42.006469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:42.006479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:42.016416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:42.016476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:42.016488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:42.016493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:42.016497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:42.016507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 00:38:15.790 [2024-07-12 01:56:42.026419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:42.026471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:42.026483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:42.026488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:42.026492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.790 [2024-07-12 01:56:42.026502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.790 qpair failed and we were unable to recover it. 
00:38:15.790 [2024-07-12 01:56:42.036488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.790 [2024-07-12 01:56:42.036554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.790 [2024-07-12 01:56:42.036566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.790 [2024-07-12 01:56:42.036571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.790 [2024-07-12 01:56:42.036576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.036586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.046513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.046564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.046576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.046581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.046585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.046598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.056409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.056477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.056488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.056493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.056498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.056508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 
00:38:15.791 [2024-07-12 01:56:42.066440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.066541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.066552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.066557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.066562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.066572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.076567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.076617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.076628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.076634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.076638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.076649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.086628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.086681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.086692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.086697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.086702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.086712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 
00:38:15.791 [2024-07-12 01:56:42.096654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.096716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.096730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.096735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.096740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.096750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.106656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.106707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.106718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.106723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.106728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.106738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.116701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.116759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.116771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.116776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.116781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.116791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 
00:38:15.791 [2024-07-12 01:56:42.126750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.126807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.126818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.126823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.126827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.126837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:15.791 [2024-07-12 01:56:42.136768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:15.791 [2024-07-12 01:56:42.136826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:15.791 [2024-07-12 01:56:42.136837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:15.791 [2024-07-12 01:56:42.136842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:15.791 [2024-07-12 01:56:42.136849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:15.791 [2024-07-12 01:56:42.136859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.791 qpair failed and we were unable to recover it. 00:38:16.054 [2024-07-12 01:56:42.146783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.054 [2024-07-12 01:56:42.146831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.054 [2024-07-12 01:56:42.146842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.054 [2024-07-12 01:56:42.146847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.054 [2024-07-12 01:56:42.146851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.054 [2024-07-12 01:56:42.146862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.054 qpair failed and we were unable to recover it. 
00:38:16.054 [2024-07-12 01:56:42.156840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.054 [2024-07-12 01:56:42.156891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.054 [2024-07-12 01:56:42.156902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.156907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.156911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.156921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.166849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.166901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.166914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.166919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.166923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.166934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.176867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.176928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.176939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.176944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.176948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.176960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 
00:38:16.055 [2024-07-12 01:56:42.186886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.186950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.186969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.186975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.186980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.186994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.196872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.196934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.196953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.196960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.196964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.196978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.206953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.207008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.207021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.207026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.207031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.207042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 
00:38:16.055 [2024-07-12 01:56:42.216980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.217040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.217059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.217065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.217070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.217084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.227035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.227085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.227097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.227103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.227110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.227121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.236999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.237047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.237059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.237064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.237068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.237079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 
00:38:16.055 [2024-07-12 01:56:42.246937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.246997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.247008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.247013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.247017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.247028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.257099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.257175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.257186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.257191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.257196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.257206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.267089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.267132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.267143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.267148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.267152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.267162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 
00:38:16.055 [2024-07-12 01:56:42.277114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.277164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.277175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.277180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.277184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.277194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.287172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.287227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.287242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.287248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.287252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.055 [2024-07-12 01:56:42.287262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.055 qpair failed and we were unable to recover it. 00:38:16.055 [2024-07-12 01:56:42.297130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.055 [2024-07-12 01:56:42.297185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.055 [2024-07-12 01:56:42.297196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.055 [2024-07-12 01:56:42.297201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.055 [2024-07-12 01:56:42.297205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.297215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 
00:38:16.056 [2024-07-12 01:56:42.307271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.307364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.307375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.307380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.307384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.307395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.317161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.317209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.317220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.317228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.317235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.317245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.327162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.327218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.327233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.327239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.327243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.327254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 
00:38:16.056 [2024-07-12 01:56:42.337309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.337365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.337377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.337382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.337386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.337396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.347297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.347340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.347351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.347357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.347361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.347371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.357335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.357379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.357391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.357396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.357400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.357411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 
00:38:16.056 [2024-07-12 01:56:42.367390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.367446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.367457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.367462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.367467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.367477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.377402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.377460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.377471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.377476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.377481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.377491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.387296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.387347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.387359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.387364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.387369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.387379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 
00:38:16.056 [2024-07-12 01:56:42.397320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.397369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.397381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.397386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.397391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.397401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.056 [2024-07-12 01:56:42.407533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.056 [2024-07-12 01:56:42.407584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.056 [2024-07-12 01:56:42.407598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.056 [2024-07-12 01:56:42.407603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.056 [2024-07-12 01:56:42.407608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.056 [2024-07-12 01:56:42.407618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.056 qpair failed and we were unable to recover it. 00:38:16.319 [2024-07-12 01:56:42.417528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.319 [2024-07-12 01:56:42.417583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.319 [2024-07-12 01:56:42.417594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.319 [2024-07-12 01:56:42.417599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.319 [2024-07-12 01:56:42.417604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.319 [2024-07-12 01:56:42.417614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.319 qpair failed and we were unable to recover it. 
00:38:16.319 [2024-07-12 01:56:42.427416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.319 [2024-07-12 01:56:42.427465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.319 [2024-07-12 01:56:42.427475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.319 [2024-07-12 01:56:42.427480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.319 [2024-07-12 01:56:42.427485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.319 [2024-07-12 01:56:42.427494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.319 qpair failed and we were unable to recover it. 00:38:16.319 [2024-07-12 01:56:42.437565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.319 [2024-07-12 01:56:42.437612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.319 [2024-07-12 01:56:42.437623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.319 [2024-07-12 01:56:42.437628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.319 [2024-07-12 01:56:42.437632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.437642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.447620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.447674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.447685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.447689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.447694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.447707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 
00:38:16.320 [2024-07-12 01:56:42.457647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.457706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.457717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.457722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.457726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.457736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.467591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.467656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.467667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.467671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.467676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.467686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.477704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.477750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.477760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.477766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.477770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.477780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 
00:38:16.320 [2024-07-12 01:56:42.487800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.487856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.487868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.487872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.487877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.487887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.497823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.497887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.497901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.497906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.497910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.497921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.507781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.507826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.507837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.507841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.507846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.507856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 
00:38:16.320 [2024-07-12 01:56:42.517803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.517889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.517900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.517905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.517910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.517920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.527739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.527793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.527804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.527809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.527814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.527824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 00:38:16.320 [2024-07-12 01:56:42.537861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.537918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.320 [2024-07-12 01:56:42.537929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.320 [2024-07-12 01:56:42.537934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.320 [2024-07-12 01:56:42.537939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.320 [2024-07-12 01:56:42.537954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.320 qpair failed and we were unable to recover it. 
00:38:16.320 [2024-07-12 01:56:42.547922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.320 [2024-07-12 01:56:42.548005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.548016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.548021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.548025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.548036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.557823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.557869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.557882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.557887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.557891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.557902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.567956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.568010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.568023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.568028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.568033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.568043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 
00:38:16.321 [2024-07-12 01:56:42.577953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.578010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.578021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.578025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.578030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.578040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.587968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.588062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.588073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.588078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.588083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.588093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.598083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.598136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.598147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.598152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.598157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.598167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 
00:38:16.321 [2024-07-12 01:56:42.608068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.608127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.608138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.608143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.608148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.608157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.618094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.618145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.618156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.618161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.618166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.618175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.628007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.628065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.628076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.628081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.628089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.628099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 
00:38:16.321 [2024-07-12 01:56:42.638000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.638046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.638058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.638063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.638067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.638077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.321 [2024-07-12 01:56:42.648191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.321 [2024-07-12 01:56:42.648247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.321 [2024-07-12 01:56:42.648259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.321 [2024-07-12 01:56:42.648264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.321 [2024-07-12 01:56:42.648268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.321 [2024-07-12 01:56:42.648279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.321 qpair failed and we were unable to recover it. 00:38:16.322 [2024-07-12 01:56:42.658216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.322 [2024-07-12 01:56:42.658277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.322 [2024-07-12 01:56:42.658288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.322 [2024-07-12 01:56:42.658293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.322 [2024-07-12 01:56:42.658297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.322 [2024-07-12 01:56:42.658307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.322 qpair failed and we were unable to recover it. 
00:38:16.322 [2024-07-12 01:56:42.668224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.322 [2024-07-12 01:56:42.668270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.322 [2024-07-12 01:56:42.668281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.322 [2024-07-12 01:56:42.668286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.322 [2024-07-12 01:56:42.668291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.322 [2024-07-12 01:56:42.668301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.322 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.678113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.678158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.678169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.678174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.678178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.678188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.688318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.688371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.688382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.688387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.688391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.688402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 
00:38:16.585 [2024-07-12 01:56:42.698328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.698387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.698398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.698403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.698407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.698417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.708307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.708391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.708401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.708406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.708411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.708421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.718381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.718473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.718484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.718493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.718497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.718507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 
00:38:16.585 [2024-07-12 01:56:42.728438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.728493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.728504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.728509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.728513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.728523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.738449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.738509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.738520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.738525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.738529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.738539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 00:38:16.585 [2024-07-12 01:56:42.748407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.748450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.748462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.748467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.585 [2024-07-12 01:56:42.748472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.585 [2024-07-12 01:56:42.748482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.585 qpair failed and we were unable to recover it. 
00:38:16.585 [2024-07-12 01:56:42.758381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.585 [2024-07-12 01:56:42.758430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.585 [2024-07-12 01:56:42.758441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.585 [2024-07-12 01:56:42.758446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.758451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.758461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.768542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.768596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.768607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.768611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.768616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.768626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.778552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.778613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.778624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.778629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.778634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.778644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 
00:38:16.586 [2024-07-12 01:56:42.788559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.788602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.788613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.788618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.788623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.788633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.798584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.798635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.798647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.798652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.798656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.798667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.808637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.808695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.808709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.808714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.808718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.808728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 
00:38:16.586 [2024-07-12 01:56:42.818670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.818723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.818734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.818739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.818744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.818754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.828643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.828688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.828700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.828706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.828710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.828720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.838696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.838771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.838782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.838787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.838792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.838802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 
00:38:16.586 [2024-07-12 01:56:42.848758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.848811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.848823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.848828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.848833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.848846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.858767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.858821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.858832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.858837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.858842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.858852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.868760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.868811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.868822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.868827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.868831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.868842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 
00:38:16.586 [2024-07-12 01:56:42.878808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.878852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.878863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.878868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.878872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.878882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.888857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.888911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.888922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.888928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.888932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.888942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.586 qpair failed and we were unable to recover it. 00:38:16.586 [2024-07-12 01:56:42.898874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.586 [2024-07-12 01:56:42.898931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.586 [2024-07-12 01:56:42.898953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.586 [2024-07-12 01:56:42.898960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.586 [2024-07-12 01:56:42.898965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.586 [2024-07-12 01:56:42.898979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.587 qpair failed and we were unable to recover it. 
00:38:16.587 [2024-07-12 01:56:42.908868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.587 [2024-07-12 01:56:42.908918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.587 [2024-07-12 01:56:42.908937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.587 [2024-07-12 01:56:42.908943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.587 [2024-07-12 01:56:42.908948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.587 [2024-07-12 01:56:42.908962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.587 qpair failed and we were unable to recover it. 00:38:16.587 [2024-07-12 01:56:42.918925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.587 [2024-07-12 01:56:42.919005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.587 [2024-07-12 01:56:42.919024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.587 [2024-07-12 01:56:42.919030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.587 [2024-07-12 01:56:42.919035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.587 [2024-07-12 01:56:42.919049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.587 qpair failed and we were unable to recover it. 00:38:16.587 [2024-07-12 01:56:42.928896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.587 [2024-07-12 01:56:42.928952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.587 [2024-07-12 01:56:42.928965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.587 [2024-07-12 01:56:42.928970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.587 [2024-07-12 01:56:42.928975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.587 [2024-07-12 01:56:42.928987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.587 qpair failed and we were unable to recover it. 
00:38:16.587 [2024-07-12 01:56:42.938991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.587 [2024-07-12 01:56:42.939055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.587 [2024-07-12 01:56:42.939067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.587 [2024-07-12 01:56:42.939072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.587 [2024-07-12 01:56:42.939076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.587 [2024-07-12 01:56:42.939091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.587 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:42.948850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.948895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.948907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.948912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.948916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.948927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:42.959067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.959115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.959126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.959131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.959135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.959145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:42.969081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.969137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.969148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.969153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.969157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.969168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:42.979098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.979160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.979171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.979176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.979180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.979190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:42.989078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.989159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.989171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.989176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.989180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.989190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:42.999091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:42.999134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:42.999145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:42.999150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:42.999154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:42.999165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.009185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.009240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.009251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.009257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.009261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.009272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.019238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.019327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.019339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.019343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.019349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.019360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:43.029211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.029260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.029271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.029276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.029284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.029294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.039260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.039304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.039314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.039320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.039324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.039334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.049323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.049373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.049384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.049389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.049394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.049404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:43.059342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.059408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.059419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.059424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.059429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.059439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.069277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.069363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.069374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.069380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.069385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.069396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.079324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.079373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.079385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.079390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.079394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.079404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:43.089423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.089495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.089506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.089511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.089516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.089526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.099303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.099408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.099420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.099425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.099429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.099440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.109424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.109469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.109480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.109485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.109490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.109500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:43.119474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.119526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.119538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.119545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.119550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.119560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.129503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.129588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.129599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.129603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.129608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.129619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.139517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.139571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.139582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.139587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.139592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.139602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 
00:38:16.850 [2024-07-12 01:56:43.149524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.149610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.149621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.149626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.850 [2024-07-12 01:56:43.149630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.850 [2024-07-12 01:56:43.149640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.850 qpair failed and we were unable to recover it. 00:38:16.850 [2024-07-12 01:56:43.159433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.850 [2024-07-12 01:56:43.159476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.850 [2024-07-12 01:56:43.159487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.850 [2024-07-12 01:56:43.159492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.851 [2024-07-12 01:56:43.159497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.851 [2024-07-12 01:56:43.159507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.851 qpair failed and we were unable to recover it. 00:38:16.851 [2024-07-12 01:56:43.169620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.851 [2024-07-12 01:56:43.169673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.851 [2024-07-12 01:56:43.169684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.851 [2024-07-12 01:56:43.169689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.851 [2024-07-12 01:56:43.169693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.851 [2024-07-12 01:56:43.169703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.851 qpair failed and we were unable to recover it. 
00:38:16.851 [2024-07-12 01:56:43.179693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.851 [2024-07-12 01:56:43.179745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.851 [2024-07-12 01:56:43.179755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.851 [2024-07-12 01:56:43.179760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.851 [2024-07-12 01:56:43.179765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.851 [2024-07-12 01:56:43.179774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.851 qpair failed and we were unable to recover it. 00:38:16.851 [2024-07-12 01:56:43.189629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.851 [2024-07-12 01:56:43.189673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.851 [2024-07-12 01:56:43.189684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.851 [2024-07-12 01:56:43.189689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.851 [2024-07-12 01:56:43.189693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.851 [2024-07-12 01:56:43.189703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.851 qpair failed and we were unable to recover it. 00:38:16.851 [2024-07-12 01:56:43.199660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:16.851 [2024-07-12 01:56:43.199707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:16.851 [2024-07-12 01:56:43.199718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:16.851 [2024-07-12 01:56:43.199723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:16.851 [2024-07-12 01:56:43.199728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:16.851 [2024-07-12 01:56:43.199738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.851 qpair failed and we were unable to recover it. 
00:38:17.113 [2024-07-12 01:56:43.209728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.209777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.209789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.209797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.209801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.209812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 00:38:17.113 [2024-07-12 01:56:43.219758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.219815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.219826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.219831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.219835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.219845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 00:38:17.113 [2024-07-12 01:56:43.229763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.229810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.229820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.229825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.229830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.229839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 
00:38:17.113 [2024-07-12 01:56:43.239779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.239830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.239840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.239845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.239850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.239860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 00:38:17.113 [2024-07-12 01:56:43.249869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.249920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.249931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.249936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.249940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.249950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 00:38:17.113 [2024-07-12 01:56:43.259865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.259925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.259945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.259951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.259956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.259970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 
00:38:17.113 [2024-07-12 01:56:43.269908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.269956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.269972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.269977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.113 [2024-07-12 01:56:43.269982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.113 [2024-07-12 01:56:43.269994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.113 qpair failed and we were unable to recover it. 00:38:17.113 [2024-07-12 01:56:43.279898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.113 [2024-07-12 01:56:43.279946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.113 [2024-07-12 01:56:43.279966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.113 [2024-07-12 01:56:43.279972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.279977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.279991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.289947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.290049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.290068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.290075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.290079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.290094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 
00:38:17.114 [2024-07-12 01:56:43.300020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.300080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.300099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.300104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.300109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.300120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.309979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.310056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.310069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.310074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.310079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.310091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.320036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.320088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.320100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.320105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.320109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.320120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 
00:38:17.114 [2024-07-12 01:56:43.330059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.330114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.330126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.330131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.330136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.330146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.340082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.340137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.340149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.340154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.340158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.340172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.349961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.350008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.350020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.350025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.350029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.350040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 
00:38:17.114 [2024-07-12 01:56:43.359967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.360016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.360027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.360033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.360037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.360048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.370187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.370252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.370264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.370269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.370274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.370284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.380210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.380269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.380281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.380286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.380290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.380300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 
00:38:17.114 [2024-07-12 01:56:43.390136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.390180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.390194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.390199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.390203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.390214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.400217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.400287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.400299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.400304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.400309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.400319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 00:38:17.114 [2024-07-12 01:56:43.410277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.410330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.410341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.410346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.410351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.114 [2024-07-12 01:56:43.410361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.114 qpair failed and we were unable to recover it. 
00:38:17.114 [2024-07-12 01:56:43.420303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.114 [2024-07-12 01:56:43.420365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.114 [2024-07-12 01:56:43.420377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.114 [2024-07-12 01:56:43.420381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.114 [2024-07-12 01:56:43.420386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.115 [2024-07-12 01:56:43.420396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.115 qpair failed and we were unable to recover it. 00:38:17.115 [2024-07-12 01:56:43.430302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.115 [2024-07-12 01:56:43.430353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.115 [2024-07-12 01:56:43.430364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.115 [2024-07-12 01:56:43.430369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.115 [2024-07-12 01:56:43.430376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.115 [2024-07-12 01:56:43.430387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.115 qpair failed and we were unable to recover it. 00:38:17.115 [2024-07-12 01:56:43.440241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.115 [2024-07-12 01:56:43.440294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.115 [2024-07-12 01:56:43.440305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.115 [2024-07-12 01:56:43.440310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.115 [2024-07-12 01:56:43.440315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.115 [2024-07-12 01:56:43.440325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.115 qpair failed and we were unable to recover it. 
00:38:17.115 [2024-07-12 01:56:43.450425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.115 [2024-07-12 01:56:43.450481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.115 [2024-07-12 01:56:43.450493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.115 [2024-07-12 01:56:43.450498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.115 [2024-07-12 01:56:43.450502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.115 [2024-07-12 01:56:43.450513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.115 qpair failed and we were unable to recover it. 00:38:17.115 [2024-07-12 01:56:43.460437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.115 [2024-07-12 01:56:43.460492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.115 [2024-07-12 01:56:43.460503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.115 [2024-07-12 01:56:43.460507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.115 [2024-07-12 01:56:43.460512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.115 [2024-07-12 01:56:43.460523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.115 qpair failed and we were unable to recover it. 00:38:17.377 [2024-07-12 01:56:43.470425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.470476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.470487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.470492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.470497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.470507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 
00:38:17.377 [2024-07-12 01:56:43.480457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.480513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.480524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.480529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.480534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.480544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 00:38:17.377 [2024-07-12 01:56:43.490545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.490597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.490608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.490613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.490618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.490628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 00:38:17.377 [2024-07-12 01:56:43.500541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.500602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.500613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.500618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.500623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.500633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 
00:38:17.377 [2024-07-12 01:56:43.510539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.510586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.510597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.510602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.510607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.510617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 00:38:17.377 [2024-07-12 01:56:43.520580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.520629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.520641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.520648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.520653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.520663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.377 qpair failed and we were unable to recover it. 00:38:17.377 [2024-07-12 01:56:43.530681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.377 [2024-07-12 01:56:43.530736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.377 [2024-07-12 01:56:43.530747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.377 [2024-07-12 01:56:43.530752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.377 [2024-07-12 01:56:43.530756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.377 [2024-07-12 01:56:43.530766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 
00:38:17.378 [2024-07-12 01:56:43.540637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.540731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.540742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.540748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.540752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.540762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.550647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.550696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.550707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.550712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.550716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.550726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.560672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.560724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.560735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.560740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.560744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.560754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 
00:38:17.378 [2024-07-12 01:56:43.570658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.570761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.570772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.570777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.570782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.570792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.580781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.580837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.580848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.580853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.580858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.580867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.590666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.590714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.590725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.590731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.590735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.590745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 
00:38:17.378 [2024-07-12 01:56:43.600813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.600864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.600875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.600880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.600885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.600896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.610930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.610985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.610996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.611004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.611008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.611018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.620797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.620854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.620866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.620871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.620875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.620885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 
00:38:17.378 [2024-07-12 01:56:43.630884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.630937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.630948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.630953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.630957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.630967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.640907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.640955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.640966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.640971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.640976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.640986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 00:38:17.378 [2024-07-12 01:56:43.651007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.651061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.651072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.651077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.378 [2024-07-12 01:56:43.651082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.378 [2024-07-12 01:56:43.651092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.378 qpair failed and we were unable to recover it. 
00:38:17.378 [2024-07-12 01:56:43.660997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.378 [2024-07-12 01:56:43.661057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.378 [2024-07-12 01:56:43.661068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.378 [2024-07-12 01:56:43.661073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.661078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.661088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.379 [2024-07-12 01:56:43.670974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.671020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.671031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.671036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.671041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.671052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.379 [2024-07-12 01:56:43.681044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.681122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.681133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.681138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.681142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.681153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 
00:38:17.379 [2024-07-12 01:56:43.691078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.691131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.691142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.691147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.691151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.691161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.379 [2024-07-12 01:56:43.701101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.701154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.701168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.701173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.701178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.701188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.379 [2024-07-12 01:56:43.711122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.711165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.711177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.711182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.711188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.711199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 
00:38:17.379 [2024-07-12 01:56:43.720989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.721038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.721049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.721054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.721059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.721070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.379 [2024-07-12 01:56:43.731140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.379 [2024-07-12 01:56:43.731186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.379 [2024-07-12 01:56:43.731198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.379 [2024-07-12 01:56:43.731203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.379 [2024-07-12 01:56:43.731207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.379 [2024-07-12 01:56:43.731218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.379 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.741205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.741261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.741272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.741277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.741282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.741295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 
00:38:17.641 [2024-07-12 01:56:43.751183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.751274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.751285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.751293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.751298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.751308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.761114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.761160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.761172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.761177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.761181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.761192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.771274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.771321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.771333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.771338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.771342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.771352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 
00:38:17.641 [2024-07-12 01:56:43.781310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.781384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.781396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.781401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.781405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.781415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.791204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.791251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.791264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.791269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.791274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.791284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.801329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.801372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.801383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.801388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.801393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.801403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 
00:38:17.641 [2024-07-12 01:56:43.811380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.811427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.811438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.811443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.811448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.811458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.821435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.821512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.821523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.821528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.821533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.821543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.831322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.831371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.831382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.831387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.831394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.831405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 
00:38:17.641 [2024-07-12 01:56:43.841466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.841514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.841526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.841531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.841535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.841545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.851541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.851587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.851599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.851604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.851608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.851619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.861552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.861602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.861612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.861617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.861622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.861632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 
00:38:17.641 [2024-07-12 01:56:43.871495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.871591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.871602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.871607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.641 [2024-07-12 01:56:43.871612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.641 [2024-07-12 01:56:43.871621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.641 qpair failed and we were unable to recover it. 00:38:17.641 [2024-07-12 01:56:43.881574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.641 [2024-07-12 01:56:43.881622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.641 [2024-07-12 01:56:43.881632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.641 [2024-07-12 01:56:43.881637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.881642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.881652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.891607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.891652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.891662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.891667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.891671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.891681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 
00:38:17.642 [2024-07-12 01:56:43.901670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.901723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.901733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.901738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.901743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.901753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.911530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.911576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.911586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.911591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.911596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.911606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.921689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.921735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.921747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.921752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.921759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.921770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 
00:38:17.642 [2024-07-12 01:56:43.931710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.931754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.931765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.931770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.931775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.931785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.941774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.941828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.941839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.941844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.941849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.941859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.951751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.951798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.951809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.951814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.951818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.951829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 
00:38:17.642 [2024-07-12 01:56:43.961778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.961824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.961836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.961841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.961846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.961857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.971815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.971864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.971876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.971881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.971886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.971896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.642 [2024-07-12 01:56:43.981883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.981932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.981944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.981950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.981955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.981966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 
00:38:17.642 [2024-07-12 01:56:43.991864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.642 [2024-07-12 01:56:43.991916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.642 [2024-07-12 01:56:43.991934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.642 [2024-07-12 01:56:43.991940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.642 [2024-07-12 01:56:43.991945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.642 [2024-07-12 01:56:43.991959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.642 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.001886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.001931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.001944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.001949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.001954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.001965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.011923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.012001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.012013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.012021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.012026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.012037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 
00:38:17.904 [2024-07-12 01:56:44.021963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.022018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.022029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.022034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.022039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.022049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.031972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.032033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.032045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.032050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.032055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.032068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.041874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.041921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.041933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.041938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.041943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.041953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 
00:38:17.904 [2024-07-12 01:56:44.052029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.052081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.052093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.052098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.052102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.052113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.062090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.062152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.062164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.062169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.062173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.062184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.072086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.072132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.072143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.072148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.072152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.072163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 
00:38:17.904 [2024-07-12 01:56:44.081987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.082036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.082047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.082052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.082056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.082066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.092146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.092194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.092205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.904 [2024-07-12 01:56:44.092210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.904 [2024-07-12 01:56:44.092214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.904 [2024-07-12 01:56:44.092224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.904 qpair failed and we were unable to recover it. 00:38:17.904 [2024-07-12 01:56:44.102208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.904 [2024-07-12 01:56:44.102303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.904 [2024-07-12 01:56:44.102318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.102323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.102328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.102342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 
00:38:17.905 [2024-07-12 01:56:44.112194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.112239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.112251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.112256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.112261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.112272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.122215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.122266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.122278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.122283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.122287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.122298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.132253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.132299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.132310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.132315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.132320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.132330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 
00:38:17.905 [2024-07-12 01:56:44.142310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.142363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.142375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.142380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.142384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.142397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.152298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.152347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.152358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.152363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.152368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.152378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.162339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.162382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.162393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.162398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.162402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.162413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 
00:38:17.905 [2024-07-12 01:56:44.172354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.172400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.172410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.172415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.172420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.172430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.182438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.182494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.182505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.182510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.182515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.182525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.192418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.192509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.192523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.192528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.192533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.192544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 
00:38:17.905 [2024-07-12 01:56:44.202353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.202400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.202411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.202416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.202421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.202431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.212448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.212496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.212507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.212512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.212516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.212526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.222558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.222614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.222625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.222630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.222634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.222644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 
00:38:17.905 [2024-07-12 01:56:44.232520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.232567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.232578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.232583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.232590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.232601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.905 [2024-07-12 01:56:44.242412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.905 [2024-07-12 01:56:44.242458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.905 [2024-07-12 01:56:44.242469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.905 [2024-07-12 01:56:44.242474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.905 [2024-07-12 01:56:44.242478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.905 [2024-07-12 01:56:44.242488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.905 qpair failed and we were unable to recover it. 00:38:17.906 [2024-07-12 01:56:44.252574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:17.906 [2024-07-12 01:56:44.252629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:17.906 [2024-07-12 01:56:44.252640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:17.906 [2024-07-12 01:56:44.252645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:17.906 [2024-07-12 01:56:44.252649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:17.906 [2024-07-12 01:56:44.252660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.906 qpair failed and we were unable to recover it. 
00:38:18.168 [2024-07-12 01:56:44.262632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.262685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.262696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.262701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.262706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.262716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.272619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.272665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.272676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.272681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.272685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.272695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.282652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.282701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.282712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.282717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.282721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.282731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 
00:38:18.168 [2024-07-12 01:56:44.292683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.292728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.292739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.292744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.292749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.292759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.302745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.302793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.302804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.302809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.302813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.302823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.312742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.312788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.312799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.312804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.312808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.312818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 
00:38:18.168 [2024-07-12 01:56:44.322661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.322764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.322775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.322780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.322787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.322798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.332790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.332837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.332848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.332853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.332857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.332867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 00:38:18.168 [2024-07-12 01:56:44.342851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.168 [2024-07-12 01:56:44.342909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.168 [2024-07-12 01:56:44.342921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.168 [2024-07-12 01:56:44.342926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.168 [2024-07-12 01:56:44.342931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.168 [2024-07-12 01:56:44.342941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.168 qpair failed and we were unable to recover it. 
00:38:18.168 [2024-07-12 01:56:44.352843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.352894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.352913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.352919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.352924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.352938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.362951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.363044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.363056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.363061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.363066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.363077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.372895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.372944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.372963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.372969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.372974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.372987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 
00:38:18.169 [2024-07-12 01:56:44.382960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.383012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.383031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.383037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.383042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.383056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.392818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.392866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.392885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.392891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.392896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.392910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.402944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.402994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.403006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.403011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.403016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.403027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 
00:38:18.169 [2024-07-12 01:56:44.413006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.413056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.413075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.413084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.413089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.413103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.423065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.423118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.423130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.423135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.423140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.423151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.433060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.433109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.433121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.433126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.433131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.433142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 
00:38:18.169 [2024-07-12 01:56:44.442948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.442992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.443004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.443009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.443013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.443024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.453107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.453160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.453171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.453177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.453181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.453191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.463172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.463263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.463274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.463279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.463283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.463294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 
00:38:18.169 [2024-07-12 01:56:44.473155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.473199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.473210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.473215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.473219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.473233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.483177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.483222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.483236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.483242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.169 [2024-07-12 01:56:44.483246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.169 [2024-07-12 01:56:44.483257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.169 qpair failed and we were unable to recover it. 00:38:18.169 [2024-07-12 01:56:44.493236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.169 [2024-07-12 01:56:44.493283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.169 [2024-07-12 01:56:44.493294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.169 [2024-07-12 01:56:44.493299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.170 [2024-07-12 01:56:44.493304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.170 [2024-07-12 01:56:44.493315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.170 qpair failed and we were unable to recover it. 
00:38:18.170 [2024-07-12 01:56:44.503158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.170 [2024-07-12 01:56:44.503218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.170 [2024-07-12 01:56:44.503236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.170 [2024-07-12 01:56:44.503241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.170 [2024-07-12 01:56:44.503246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.170 [2024-07-12 01:56:44.503257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.170 qpair failed and we were unable to recover it. 00:38:18.170 [2024-07-12 01:56:44.513274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.170 [2024-07-12 01:56:44.513323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.170 [2024-07-12 01:56:44.513335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.170 [2024-07-12 01:56:44.513340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.170 [2024-07-12 01:56:44.513344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.170 [2024-07-12 01:56:44.513354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.170 qpair failed and we were unable to recover it. 00:38:18.170 [2024-07-12 01:56:44.523293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.170 [2024-07-12 01:56:44.523339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.170 [2024-07-12 01:56:44.523351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.170 [2024-07-12 01:56:44.523355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.170 [2024-07-12 01:56:44.523360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.170 [2024-07-12 01:56:44.523370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.170 qpair failed and we were unable to recover it. 
00:38:18.432 [2024-07-12 01:56:44.533329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.533395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.533407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.533412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.533416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.533426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.543389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.543481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.543493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.543498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.543503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.543517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.553378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.553424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.553435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.553440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.553444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.553454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 
00:38:18.432 [2024-07-12 01:56:44.563451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.563518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.563529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.563534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.563538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.563548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.573437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.573485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.573496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.573501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.573505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.573515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.583534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.583586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.583597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.583602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.583606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.583616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 
00:38:18.432 [2024-07-12 01:56:44.593479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.593537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.593551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.593556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.593560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.593570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.603497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.603544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.603555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.603559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.603564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.603573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.613410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.613454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.613466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.613471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.613476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.613486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 
00:38:18.432 [2024-07-12 01:56:44.623467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.623520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.623531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.623536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.623540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.623550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.633597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.633644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.633654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.633659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.633664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.633677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 00:38:18.432 [2024-07-12 01:56:44.643597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.643683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.643694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.643699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.432 [2024-07-12 01:56:44.643704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.432 [2024-07-12 01:56:44.643714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.432 qpair failed and we were unable to recover it. 
00:38:18.432 [2024-07-12 01:56:44.653647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.432 [2024-07-12 01:56:44.653693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.432 [2024-07-12 01:56:44.653704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.432 [2024-07-12 01:56:44.653709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.653713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.653723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.663692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.663749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.663759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.663764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.663768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.663778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.673723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.673809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.673820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.673825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.673829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.673839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 
00:38:18.433 [2024-07-12 01:56:44.683583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.683629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.683640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.683645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.683649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.683659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.693761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.693810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.693822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.693826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.693831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.693841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.703766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.703828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.703839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.703844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.703848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.703858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 
00:38:18.433 [2024-07-12 01:56:44.713757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.713800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.713811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.713816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.713820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.713830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.723838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.723929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.723941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.723946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.723954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.723964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.733852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.733897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.733908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.733913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.733918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.733928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 
00:38:18.433 [2024-07-12 01:56:44.743954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.744055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.744065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.744070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.744075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.744085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.753965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.754028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.754046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.754052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.754057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.754072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.763936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.763988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.764007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.764013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.764018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.764033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 
00:38:18.433 [2024-07-12 01:56:44.773941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.773995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.774014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.774020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.774026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.774039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.433 [2024-07-12 01:56:44.784035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.433 [2024-07-12 01:56:44.784114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.433 [2024-07-12 01:56:44.784126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.433 [2024-07-12 01:56:44.784132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.433 [2024-07-12 01:56:44.784136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.433 [2024-07-12 01:56:44.784147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.433 qpair failed and we were unable to recover it. 00:38:18.696 [2024-07-12 01:56:44.793880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.793928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.793939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.793945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.793949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.793960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 
00:38:18.697 [2024-07-12 01:56:44.804035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.804122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.804133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.804139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.804144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.804154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.814095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.814188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.814200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.814208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.814213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.814224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.824127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.824225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.824240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.824245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.824251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.824261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 
00:38:18.697 [2024-07-12 01:56:44.833981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.834029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.834040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.834045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.834049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.834059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.844136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.844181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.844193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.844198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.844202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.844212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.854157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.854207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.854218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.854223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.854227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.854241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 
00:38:18.697 [2024-07-12 01:56:44.864231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.864290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.864301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.864306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.864310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.864321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.874232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.874282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.874293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.874298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.874303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.874313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.884245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.884296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.884307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.884312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.884317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.884327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 
00:38:18.697 [2024-07-12 01:56:44.894307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.894355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.894365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.894370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.894375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.894385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.904321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.904381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.904392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.904400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.697 [2024-07-12 01:56:44.904404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.697 [2024-07-12 01:56:44.904415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.697 qpair failed and we were unable to recover it. 00:38:18.697 [2024-07-12 01:56:44.914318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.697 [2024-07-12 01:56:44.914367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.697 [2024-07-12 01:56:44.914378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.697 [2024-07-12 01:56:44.914383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.914388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.914398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 
00:38:18.698 [2024-07-12 01:56:44.924346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.924395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.924406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.924412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.924416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.924426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:44.934369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.934416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.934427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.934432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.934437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.934447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:44.944449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.944506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.944517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.944522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.944527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.944536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 
00:38:18.698 [2024-07-12 01:56:44.954378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.954422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.954433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.954438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.954443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.954453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:44.964496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.964579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.964591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.964596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.964600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.964610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:44.974484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.974528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.974539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.974545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.974549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.974559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 
00:38:18.698 [2024-07-12 01:56:44.984548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.984689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.984701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.984706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.984711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.984721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:44.994550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:44.994592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:44.994606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:44.994611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:44.994615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:44.994625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:45.004571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:45.004640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:45.004652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:45.004657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:45.004662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:45.004672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 
00:38:18.698 [2024-07-12 01:56:45.014575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:45.014620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:45.014632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:45.014637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:45.014641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:45.014651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:45.024661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:45.024718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:45.024729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:45.024734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:45.024738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:45.024748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 00:38:18.698 [2024-07-12 01:56:45.034664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:45.034748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:45.034759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:45.034764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.698 [2024-07-12 01:56:45.034768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.698 [2024-07-12 01:56:45.034781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.698 qpair failed and we were unable to recover it. 
00:38:18.698 [2024-07-12 01:56:45.044682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.698 [2024-07-12 01:56:45.044738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.698 [2024-07-12 01:56:45.044749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.698 [2024-07-12 01:56:45.044754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.699 [2024-07-12 01:56:45.044759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.699 [2024-07-12 01:56:45.044768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.699 qpair failed and we were unable to recover it. 00:38:18.961 [2024-07-12 01:56:45.054702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.961 [2024-07-12 01:56:45.054753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.961 [2024-07-12 01:56:45.054764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.961 [2024-07-12 01:56:45.054769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.961 [2024-07-12 01:56:45.054774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.961 [2024-07-12 01:56:45.054784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.961 qpair failed and we were unable to recover it. 00:38:18.961 [2024-07-12 01:56:45.064770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.961 [2024-07-12 01:56:45.064825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.961 [2024-07-12 01:56:45.064836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.961 [2024-07-12 01:56:45.064840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.961 [2024-07-12 01:56:45.064845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.961 [2024-07-12 01:56:45.064855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.961 qpair failed and we were unable to recover it. 
00:38:18.961 [2024-07-12 01:56:45.074873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.074934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.074946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.074950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.074955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.074965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.084800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.084859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.084873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.084878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.084882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.084892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.094855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.094903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.094914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.094919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.094924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.094933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 
00:38:18.962 [2024-07-12 01:56:45.104791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.104852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.104863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.104868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.104872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.104882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.114911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.114957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.114968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.114973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.114977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.114987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.124892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.124962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.124973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.124978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.124986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.124996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 
00:38:18.962 [2024-07-12 01:56:45.134926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.134974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.134985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.134990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.134994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.135004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.144992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.145044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.145055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.145059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.145064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.145074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.154964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.155016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.155027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.155032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.155036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.155046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 
00:38:18.962 [2024-07-12 01:56:45.164987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.165031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.165042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.165047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.165051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.165061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.175034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.175082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.175093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.175098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.175102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.175112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 00:38:18.962 [2024-07-12 01:56:45.185106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.185160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.962 [2024-07-12 01:56:45.185171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.962 [2024-07-12 01:56:45.185176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.962 [2024-07-12 01:56:45.185180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.962 [2024-07-12 01:56:45.185190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.962 qpair failed and we were unable to recover it. 
00:38:18.962 [2024-07-12 01:56:45.194960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.962 [2024-07-12 01:56:45.195006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.195018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.195023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.195027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.195038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.205134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.205177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.205188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.205193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.205198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.205208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.215117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.215162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.215173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.215181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.215185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.215195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 
00:38:18.963 [2024-07-12 01:56:45.225212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.225303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.225315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.225320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.225325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.225336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.235197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.235286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.235297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.235303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.235307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.235317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.245223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.245274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.245285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.245290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.245294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.245305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 
00:38:18.963 [2024-07-12 01:56:45.255244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.255330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.255340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.255345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.255351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.255361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.265318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.265370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.265381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.265386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.265391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.265402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.275341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.275386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.275397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.275402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.275407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.275417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 
00:38:18.963 [2024-07-12 01:56:45.285319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.285415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.285426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.285431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.285436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.285447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.295364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.295411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.295422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.295427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.295431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.295441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 00:38:18.963 [2024-07-12 01:56:45.305447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.305499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.305510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.305518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.305522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.963 [2024-07-12 01:56:45.305532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.963 qpair failed and we were unable to recover it. 
00:38:18.963 [2024-07-12 01:56:45.315393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:18.963 [2024-07-12 01:56:45.315438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:18.963 [2024-07-12 01:56:45.315449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:18.963 [2024-07-12 01:56:45.315454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:18.963 [2024-07-12 01:56:45.315459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:18.964 [2024-07-12 01:56:45.315469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.964 qpair failed and we were unable to recover it. 00:38:19.226 [2024-07-12 01:56:45.325450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.226 [2024-07-12 01:56:45.325497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.226 [2024-07-12 01:56:45.325509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.226 [2024-07-12 01:56:45.325514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.226 [2024-07-12 01:56:45.325519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.226 [2024-07-12 01:56:45.325529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.226 qpair failed and we were unable to recover it. 00:38:19.226 [2024-07-12 01:56:45.335504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.226 [2024-07-12 01:56:45.335552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.226 [2024-07-12 01:56:45.335563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.335568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.335572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.335583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 
00:38:19.227 [2024-07-12 01:56:45.345539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.345593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.345605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.345610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.345614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.345624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.355534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.355580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.355591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.355596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.355601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.355611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.365559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.365602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.365614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.365619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.365624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.365634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 
00:38:19.227 [2024-07-12 01:56:45.375579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.375675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.375687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.375692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.375697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.375708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.385650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.385703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.385714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.385719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.385724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.385734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.395691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.395754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.395769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.395774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.395779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.395790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 
00:38:19.227 [2024-07-12 01:56:45.405747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.405795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.405806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.405811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.405816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.405826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.415743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.415792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.415803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.415808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.415814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.415824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.425773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.425825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.425837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.425842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.425846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.425856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 
00:38:19.227 [2024-07-12 01:56:45.435732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.435776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.435787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.435792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.435796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.435809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.445795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.445843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.445854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.445859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.445863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.445873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.455823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.455871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.455882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.455887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.455891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df8000b90 00:38:19.227 [2024-07-12 01:56:45.455901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.227 qpair failed and we were unable to recover it. 
00:38:19.227 [2024-07-12 01:56:45.465914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.466036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.466100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.466125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.466145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df4000b90 00:38:19.227 [2024-07-12 01:56:45.466197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.227 qpair failed and we were unable to recover it. 00:38:19.227 [2024-07-12 01:56:45.475905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.227 [2024-07-12 01:56:45.475988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.227 [2024-07-12 01:56:45.476019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.227 [2024-07-12 01:56:45.476034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.227 [2024-07-12 01:56:45.476047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1df4000b90 00:38:19.227 [2024-07-12 01:56:45.476078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.228 qpair failed and we were unable to recover it. 00:38:19.228 [2024-07-12 01:56:45.476476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199c10 is same with the state(5) to be set 00:38:19.228 [2024-07-12 01:56:45.485798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.228 [2024-07-12 01:56:45.485860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.228 [2024-07-12 01:56:45.485885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.228 [2024-07-12 01:56:45.485895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.228 [2024-07-12 01:56:45.485904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x118c0a0 00:38:19.228 [2024-07-12 01:56:45.485922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:19.228 qpair failed and we were unable to recover it. 
00:38:19.228 [2024-07-12 01:56:45.495899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.228 [2024-07-12 01:56:45.495962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.228 [2024-07-12 01:56:45.495987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.228 [2024-07-12 01:56:45.495996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.228 [2024-07-12 01:56:45.496003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x118c0a0 00:38:19.228 [2024-07-12 01:56:45.496022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:19.228 qpair failed and we were unable to recover it. 00:38:19.228 [2024-07-12 01:56:45.506035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.228 [2024-07-12 01:56:45.506191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.228 [2024-07-12 01:56:45.506267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.228 [2024-07-12 01:56:45.506292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.228 [2024-07-12 01:56:45.506313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dec000b90 00:38:19.228 [2024-07-12 01:56:45.506367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:19.228 qpair failed and we were unable to recover it. 00:38:19.228 [2024-07-12 01:56:45.515983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.228 [2024-07-12 01:56:45.516068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.228 [2024-07-12 01:56:45.516096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.228 [2024-07-12 01:56:45.516111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.228 [2024-07-12 01:56:45.516123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1dec000b90 00:38:19.228 [2024-07-12 01:56:45.516151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:19.228 qpair failed and we were unable to recover it. 
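With this many near-identical records, a quick triage pass over a saved copy of the log (the filename target_disconnect.log below is an assumption) shows at a glance which transport qpairs and qpair IDs are involved and whether the error code ever changes:

# Count failed CONNECT attempts per transport qpair pointer.
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c | sort -rn

# Count transport errors per qpair ID (here ids 1 through 4 all report errno -6).
grep -o 'on qpair id [0-9]*' target_disconnect.log | sort | uniq -c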
00:38:19.228 [2024-07-12 01:56:45.516615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199c10 (9): Bad file descriptor 00:38:19.228 Initializing NVMe Controllers 00:38:19.228 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:19.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:19.228 Initialization complete. Launching workers. 00:38:19.228 Starting thread on core 1 00:38:19.228 Starting thread on core 2 00:38:19.228 Starting thread on core 3 00:38:19.228 Starting thread on core 0 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:19.228 00:38:19.228 real 0m11.258s 00:38:19.228 user 0m21.476s 00:38:19.228 sys 0m3.663s 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.228 ************************************ 00:38:19.228 END TEST nvmf_target_disconnect_tc2 00:38:19.228 ************************************ 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:19.228 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:19.490 rmmod nvme_tcp 00:38:19.490 rmmod nvme_fabrics 00:38:19.490 rmmod nvme_keyring 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 75629 ']' 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 75629 ']' 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:19.490 01:56:45 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75629' 00:38:19.490 killing process with pid 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 75629 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:19.490 01:56:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.038 01:56:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.038 00:38:22.038 real 0m21.996s 00:38:22.038 user 0m49.158s 00:38:22.038 sys 0m9.950s 00:38:22.038 01:56:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:22.038 01:56:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:22.038 ************************************ 00:38:22.038 END TEST nvmf_target_disconnect 00:38:22.038 ************************************ 00:38:22.038 01:56:47 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:38:22.038 01:56:47 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:22.038 01:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.038 01:56:47 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:38:22.038 00:38:22.038 real 31m15.038s 00:38:22.038 user 77m7.031s 00:38:22.038 sys 8m46.216s 00:38:22.038 01:56:47 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:22.038 01:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.038 ************************************ 00:38:22.038 END TEST nvmf_tcp 00:38:22.038 ************************************ 00:38:22.038 01:56:48 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:38:22.038 01:56:48 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:22.038 01:56:48 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:22.038 01:56:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:22.038 01:56:48 -- common/autotest_common.sh@10 -- # set +x 00:38:22.038 ************************************ 00:38:22.038 START TEST spdkcli_nvmf_tcp 00:38:22.038 ************************************ 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:22.038 * Looking for test storage... 00:38:22.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:22.038 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=77463 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 77463 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 77463 ']' 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:22.039 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.039 [2024-07-12 01:56:48.235945] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:22.039 [2024-07-12 01:56:48.236001] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77463 ] 00:38:22.039 EAL: No free 2048 kB hugepages reported on node 1 00:38:22.039 [2024-07-12 01:56:48.300465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:22.039 [2024-07-12 01:56:48.332669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:22.039 [2024-07-12 01:56:48.332672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.984 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:22.984 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:38:22.984 01:56:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:22.984 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:22.984 01:56:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.984 01:56:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:22.984 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:22.984 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:22.984 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:22.984 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:22.984 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:22.984 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:22.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:22.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:22.984 
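waitforlisten only returns once the freshly started nvmf_tgt answers on its RPC socket; the spdkcli commands that follow are then issued against that target. A manual readiness probe, sketched here under the assumption that the default /var/tmp/spdk.sock socket is used, would look like this:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds as soon as the target's RPC server is up.
    if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt is ready on $SOCK"
        break
    fi
    sleep 0.1
done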
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:22.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:22.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:22.984 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:22.984 ' 00:38:25.531 [2024-07-12 01:56:51.362903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.472 [2024-07-12 01:56:52.526719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:28.384 [2024-07-12 01:56:54.664979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:30.297 [2024-07-12 01:56:56.498479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:31.679 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:31.679 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:31.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:31.679 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:31.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:31.679 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:31.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:31.679 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:31.679 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:31.679 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:31.679 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:32.002 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:32.002 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:32.002 01:56:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.263 01:56:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:32.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:32.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:32.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:32.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:32.263 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:32.263 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:32.263 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:32.263 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:32.263 ' 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:37.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:37.546 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:37.546 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:37.546 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:37.546 01:57:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:37.546 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:37.546 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.805 01:57:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 77463 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 77463 ']' 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 77463 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77463 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77463' 00:38:37.806 killing process with pid 77463 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 77463 00:38:37.806 01:57:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 77463 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 77463 ']' 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 77463 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 77463 ']' 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 77463 00:38:37.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (77463) - No such process 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 77463 is not found' 00:38:37.806 Process with pid 77463 is not found 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:37.806 00:38:37.806 real 0m16.034s 00:38:37.806 user 0m33.748s 00:38:37.806 sys 0m0.766s 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:37.806 01:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.806 ************************************ 00:38:37.806 END TEST spdkcli_nvmf_tcp 00:38:37.806 ************************************ 00:38:37.806 01:57:04 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:37.806 01:57:04 -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:38:37.806 01:57:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:37.806 01:57:04 -- common/autotest_common.sh@10 -- # set +x 00:38:38.066 ************************************ 00:38:38.066 START TEST nvmf_identify_passthru 00:38:38.066 ************************************ 00:38:38.066 01:57:04 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:38.066 * Looking for test storage... 00:38:38.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.066 01:57:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.066 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.066 01:57:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.066 01:57:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.066 01:57:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.066 01:57:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:38.067 01:57:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.067 01:57:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.067 01:57:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.067 01:57:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:38.067 01:57:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.067 01:57:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.067 01:57:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:38.067 01:57:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:38.067 01:57:04 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:38.067 01:57:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.199 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:46.200 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:46.200 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:46.200 01:57:11 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:46.200 Found net devices under 0000:31:00.0: cvl_0_0 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:46.200 Found net devices under 0000:31:00.1: cvl_0_1 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:46.200 01:57:11 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.200 01:57:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:46.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:46.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.748 ms 00:38:46.200 00:38:46.200 --- 10.0.0.2 ping statistics --- 00:38:46.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.200 rtt min/avg/max/mdev = 0.748/0.748/0.748/0.000 ms 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:46.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:38:46.200 00:38:46.200 --- 10.0.0.1 ping statistics --- 00:38:46.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.200 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:46.200 01:57:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:38:46.200 01:57:12 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:46.200 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:46.200 EAL: No free 2048 kB hugepages reported on node 1 00:38:46.461 
01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:38:46.461 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:46.461 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:46.461 01:57:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:46.461 EAL: No free 2048 kB hugepages reported on node 1 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=85438 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 85438 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 85438 ']' 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:47.030 01:57:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.030 01:57:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:47.030 [2024-07-12 01:57:13.283051] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:47.030 [2024-07-12 01:57:13.283102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.030 EAL: No free 2048 kB hugepages reported on node 1 00:38:47.030 [2024-07-12 01:57:13.355358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:47.291 [2024-07-12 01:57:13.387764] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.291 [2024-07-12 01:57:13.387801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
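For orientation, the RPC sequence the test drives against this target over the next entries (via the rpc_cmd helper, which wraps scripts/rpc.py and talks to /var/tmp/spdk.sock) is roughly equivalent to the short script below; this is a sketch only, assuming the same workspace layout and the NVMe drive at 0000:65:00.0 identified above.

#!/usr/bin/env bash
# Sketch of the passthru-identify target setup issued via rpc_cmd in the
# entries that follow; assumes nvmf_tgt is already running with --wait-for-rpc
# inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_set_config --passthru-identify-ctrlr        # enable the custom identify ctrlr handler
$RPC framework_start_init                             # leave the --wait-for-rpc state
$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit size
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Later in the log, spdk_nvme_identify connects to this subsystem over trtype:tcp at 10.0.0.2:4420 and the test compares the serial (S64GNE0R605494) and model (SAMSUNG) numbers against the values read directly from the PCIe controller above.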
00:38:47.291 [2024-07-12 01:57:13.387809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.291 [2024-07-12 01:57:13.387815] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.291 [2024-07-12 01:57:13.387821] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.291 [2024-07-12 01:57:13.387962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.291 [2024-07-12 01:57:13.388078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:47.291 [2024-07-12 01:57:13.388255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:47.291 [2024-07-12 01:57:13.388258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:38:47.861 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.861 INFO: Log level set to 20 00:38:47.861 INFO: Requests: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "method": "nvmf_set_config", 00:38:47.861 "id": 1, 00:38:47.861 "params": { 00:38:47.861 "admin_cmd_passthru": { 00:38:47.861 "identify_ctrlr": true 00:38:47.861 } 00:38:47.861 } 00:38:47.861 } 00:38:47.861 00:38:47.861 INFO: response: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "id": 1, 00:38:47.861 "result": true 00:38:47.861 } 00:38:47.861 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.861 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.861 INFO: Setting log level to 20 00:38:47.861 INFO: Setting log level to 20 00:38:47.861 INFO: Log level set to 20 00:38:47.861 INFO: Log level set to 20 00:38:47.861 INFO: Requests: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "method": "framework_start_init", 00:38:47.861 "id": 1 00:38:47.861 } 00:38:47.861 00:38:47.861 INFO: Requests: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "method": "framework_start_init", 00:38:47.861 "id": 1 00:38:47.861 } 00:38:47.861 00:38:47.861 [2024-07-12 01:57:14.127651] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:47.861 INFO: response: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "id": 1, 00:38:47.861 "result": true 00:38:47.861 } 00:38:47.861 00:38:47.861 INFO: response: 00:38:47.861 { 00:38:47.861 "jsonrpc": "2.0", 00:38:47.861 "id": 1, 00:38:47.861 "result": true 00:38:47.861 } 00:38:47.861 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.861 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.861 01:57:14 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:47.861 INFO: Setting log level to 40 00:38:47.861 INFO: Setting log level to 40 00:38:47.861 INFO: Setting log level to 40 00:38:47.861 [2024-07-12 01:57:14.140873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.861 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:47.861 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.861 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.433 Nvme0n1 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.433 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.433 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.433 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.433 [2024-07-12 01:57:14.524552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.433 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.433 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.433 [ 00:38:48.433 { 00:38:48.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:48.433 "subtype": "Discovery", 00:38:48.433 "listen_addresses": [], 00:38:48.433 "allow_any_host": true, 00:38:48.433 "hosts": [] 00:38:48.433 }, 00:38:48.433 { 00:38:48.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.433 "subtype": "NVMe", 00:38:48.433 "listen_addresses": [ 00:38:48.433 { 00:38:48.433 "trtype": "TCP", 00:38:48.433 "adrfam": "IPv4", 00:38:48.433 "traddr": "10.0.0.2", 00:38:48.433 "trsvcid": "4420" 00:38:48.433 } 00:38:48.433 ], 00:38:48.433 "allow_any_host": true, 00:38:48.433 "hosts": [], 00:38:48.433 "serial_number": 
"SPDK00000000000001", 00:38:48.433 "model_number": "SPDK bdev Controller", 00:38:48.433 "max_namespaces": 1, 00:38:48.433 "min_cntlid": 1, 00:38:48.433 "max_cntlid": 65519, 00:38:48.433 "namespaces": [ 00:38:48.433 { 00:38:48.433 "nsid": 1, 00:38:48.433 "bdev_name": "Nvme0n1", 00:38:48.433 "name": "Nvme0n1", 00:38:48.433 "nguid": "3634473052605494002538450000002B", 00:38:48.433 "uuid": "36344730-5260-5494-0025-38450000002b" 00:38:48.434 } 00:38:48.434 ] 00:38:48.434 } 00:38:48.434 ] 00:38:48.434 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:48.434 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:48.434 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:48.434 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:48.707 01:57:14 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:48.707 rmmod nvme_tcp 00:38:48.707 rmmod nvme_fabrics 00:38:48.707 rmmod nvme_keyring 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:48.707 01:57:14 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 85438 ']' 00:38:48.707 01:57:14 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 85438 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 85438 ']' 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 85438 00:38:48.707 01:57:14 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85438 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85438' 00:38:48.707 killing process with pid 85438 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 85438 00:38:48.707 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 85438 00:38:48.967 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:48.967 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:48.967 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:48.967 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:48.968 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:48.968 01:57:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.968 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:48.968 01:57:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.512 01:57:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:51.512 00:38:51.512 real 0m13.209s 00:38:51.512 user 0m10.171s 00:38:51.512 sys 0m6.482s 00:38:51.512 01:57:17 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:51.512 01:57:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.512 ************************************ 00:38:51.512 END TEST nvmf_identify_passthru 00:38:51.512 ************************************ 00:38:51.512 01:57:17 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:51.512 01:57:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:51.512 01:57:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:51.512 01:57:17 -- common/autotest_common.sh@10 -- # set +x 00:38:51.512 ************************************ 00:38:51.512 START TEST nvmf_dif 00:38:51.512 ************************************ 00:38:51.512 01:57:17 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:51.512 * Looking for test storage... 
00:38:51.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:51.512 01:57:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:51.512 01:57:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:51.512 01:57:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:51.512 01:57:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.512 01:57:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.512 01:57:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.512 01:57:17 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:51.512 01:57:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:51.512 01:57:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.512 01:57:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:51.512 01:57:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:51.512 01:57:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:51.512 01:57:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:59.647 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:59.647 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
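The device discovery traced here walks the e810/x722/mlx PCI id tables and then asks sysfs which net interface each matched function exposes. Stripped of the bookkeeping, the lookup for the two E810 ports found on this host is just the sysfs glob the trace uses; a sketch:

    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # prints the bound netdev; cvl_0_0 and cvl_0_1 in this run
    done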
00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:59.647 Found net devices under 0000:31:00.0: cvl_0_0 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:59.647 Found net devices under 0000:31:00.1: cvl_0_1 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:59.647 01:57:24 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.647 01:57:25 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:59.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:38:59.647 00:38:59.647 --- 10.0.0.2 ping statistics --- 00:38:59.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.647 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:38:59.647 00:38:59.647 --- 10.0.0.1 ping statistics --- 00:38:59.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.647 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:59.647 01:57:25 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:02.945 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:02.945 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:02.945 01:57:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:02.945 01:57:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=91967 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 91967 00:39:02.945 01:57:28 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 91967 ']' 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:02.945 01:57:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:02.945 [2024-07-12 01:57:28.944774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:02.945 [2024-07-12 01:57:28.944835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.945 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.945 [2024-07-12 01:57:29.024543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.945 [2024-07-12 01:57:29.062966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.945 [2024-07-12 01:57:29.063011] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.945 [2024-07-12 01:57:29.063019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.945 [2024-07-12 01:57:29.063025] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.945 [2024-07-12 01:57:29.063031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
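The nvmfappstart step above (nvmfpid=91967 in this pass) launches the target inside the namespace created earlier and then waits for the RPC socket. In plain form, with the workspace path as in this run, -i selecting shared-memory id 0 and -e 0xFFFF enabling all tracepoint groups per the notices above:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # the harness then waits for /var/tmp/spdk.sock before issuing any RPCs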
00:39:02.945 [2024-07-12 01:57:29.063052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:39:03.514 01:57:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 01:57:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.514 01:57:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:03.514 01:57:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 [2024-07-12 01:57:29.737140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.514 01:57:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 ************************************ 00:39:03.514 START TEST fio_dif_1_default 00:39:03.514 ************************************ 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 bdev_null0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:03.514 [2024-07-12 01:57:29.817489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:03.514 { 00:39:03.514 "params": { 00:39:03.514 "name": "Nvme$subsystem", 00:39:03.514 "trtype": "$TEST_TRANSPORT", 00:39:03.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:03.514 "adrfam": "ipv4", 00:39:03.514 "trsvcid": "$NVMF_PORT", 00:39:03.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:03.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:03.514 "hdgst": ${hdgst:-false}, 00:39:03.514 "ddgst": ${ddgst:-false} 00:39:03.514 }, 00:39:03.514 "method": "bdev_nvme_attach_controller" 00:39:03.514 } 00:39:03.514 EOF 00:39:03.514 )") 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:03.514 "params": { 00:39:03.514 "name": "Nvme0", 00:39:03.514 "trtype": "tcp", 00:39:03.514 "traddr": "10.0.0.2", 00:39:03.514 "adrfam": "ipv4", 00:39:03.514 "trsvcid": "4420", 00:39:03.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:03.514 "hdgst": false, 00:39:03.514 "ddgst": false 00:39:03.514 }, 00:39:03.514 "method": "bdev_nvme_attach_controller" 00:39:03.514 }' 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:03.514 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:03.798 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:03.798 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:03.798 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:03.798 01:57:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.065 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:04.065 fio-3.35 00:39:04.065 Starting 1 thread 00:39:04.065 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.353 00:39:16.353 filename0: (groupid=0, jobs=1): err= 0: pid=92446: Fri Jul 12 01:57:40 2024 00:39:16.353 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10016msec) 00:39:16.353 slat (nsec): min=5410, max=37018, avg=6205.15, stdev=1543.95 00:39:16.353 clat (usec): min=576, max=42343, avg=21069.21, stdev=20193.87 00:39:16.353 lat (usec): min=581, max=42351, avg=21075.41, stdev=20193.85 00:39:16.353 clat percentiles (usec): 00:39:16.353 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 824], 00:39:16.353 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:39:16.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:16.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:16.353 | 99.99th=[42206] 00:39:16.353 bw ( KiB/s): min= 704, max= 768, per=99.90%, avg=758.40, stdev=21.02, samples=20 00:39:16.353 iops : min= 176, max= 192, 
avg=189.60, stdev= 5.26, samples=20 00:39:16.353 lat (usec) : 750=3.42%, 1000=43.37% 00:39:16.353 lat (msec) : 2=3.11%, 50=50.11% 00:39:16.353 cpu : usr=95.46%, sys=4.33%, ctx=14, majf=0, minf=245 00:39:16.353 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.353 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.353 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:16.353 00:39:16.353 Run status group 0 (all jobs): 00:39:16.353 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7600KiB (7782kB), run=10016-10016msec 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 00:39:16.353 real 0m11.035s 00:39:16.353 user 0m25.837s 00:39:16.353 sys 0m0.737s 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 ************************************ 00:39:16.353 END TEST fio_dif_1_default 00:39:16.353 ************************************ 00:39:16.353 01:57:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:16.353 01:57:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:16.353 01:57:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 ************************************ 00:39:16.353 START TEST fio_dif_1_multi_subsystems 00:39:16.353 ************************************ 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:16.353 01:57:40 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 bdev_null0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.353 [2024-07-12 01:57:40.929497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:16.353 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.354 bdev_null1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:16.354 { 00:39:16.354 "params": { 00:39:16.354 "name": "Nvme$subsystem", 00:39:16.354 "trtype": "$TEST_TRANSPORT", 00:39:16.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:16.354 "adrfam": "ipv4", 00:39:16.354 "trsvcid": "$NVMF_PORT", 00:39:16.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:16.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:16.354 "hdgst": ${hdgst:-false}, 00:39:16.354 "ddgst": ${ddgst:-false} 00:39:16.354 }, 00:39:16.354 "method": "bdev_nvme_attach_controller" 00:39:16.354 } 00:39:16.354 EOF 00:39:16.354 )") 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:16.354 { 00:39:16.354 "params": { 00:39:16.354 "name": "Nvme$subsystem", 00:39:16.354 "trtype": "$TEST_TRANSPORT", 00:39:16.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:16.354 "adrfam": "ipv4", 00:39:16.354 "trsvcid": "$NVMF_PORT", 00:39:16.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:16.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:16.354 "hdgst": ${hdgst:-false}, 00:39:16.354 "ddgst": ${ddgst:-false} 00:39:16.354 }, 00:39:16.354 "method": "bdev_nvme_attach_controller" 00:39:16.354 } 00:39:16.354 EOF 00:39:16.354 )") 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
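The two-subsystem plumbing for this pass was built by the rpc_cmd calls traced above; rpc_cmd is assumed to wrap scripts/rpc.py, so the same configuration can be driven by hand (a sketch, names as in this run, repeated analogously for bdev_null1 / cnode1). The JSON printed just below is a separate concern: it is the initiator-side config handed to fio's spdk_bdev engine.

    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420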
00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:16.354 01:57:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:16.354 "params": { 00:39:16.354 "name": "Nvme0", 00:39:16.354 "trtype": "tcp", 00:39:16.354 "traddr": "10.0.0.2", 00:39:16.354 "adrfam": "ipv4", 00:39:16.354 "trsvcid": "4420", 00:39:16.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.354 "hdgst": false, 00:39:16.354 "ddgst": false 00:39:16.354 }, 00:39:16.354 "method": "bdev_nvme_attach_controller" 00:39:16.354 },{ 00:39:16.354 "params": { 00:39:16.354 "name": "Nvme1", 00:39:16.354 "trtype": "tcp", 00:39:16.354 "traddr": "10.0.0.2", 00:39:16.354 "adrfam": "ipv4", 00:39:16.354 "trsvcid": "4420", 00:39:16.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:16.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:16.354 "hdgst": false, 00:39:16.354 "ddgst": false 00:39:16.354 }, 00:39:16.354 "method": "bdev_nvme_attach_controller" 00:39:16.354 }' 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:16.354 01:57:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:16.354 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:16.354 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:16.354 fio-3.35 00:39:16.354 Starting 2 threads 00:39:16.354 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.373 00:39:26.373 filename0: (groupid=0, jobs=1): err= 0: pid=94832: Fri Jul 12 01:57:52 2024 00:39:26.373 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10038msec) 00:39:26.373 slat (nsec): min=5430, max=28689, avg=6310.81, stdev=1425.48 00:39:26.373 clat (usec): min=492, max=41809, avg=21070.08, stdev=20229.51 00:39:26.373 lat (usec): min=498, max=41815, avg=21076.39, stdev=20229.58 00:39:26.373 clat percentiles (usec): 00:39:26.373 | 1.00th=[ 668], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 725], 00:39:26.373 | 30.00th=[ 750], 40.00th=[ 840], 50.00th=[41157], 60.00th=[41157], 00:39:26.373 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:26.373 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:26.373 | 99.99th=[41681] 00:39:26.373 
bw ( KiB/s): min= 672, max= 768, per=66.69%, avg=760.00, stdev=25.16, samples=20 00:39:26.373 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:39:26.373 lat (usec) : 500=0.11%, 750=29.62%, 1000=19.85% 00:39:26.373 lat (msec) : 2=0.21%, 50=50.21% 00:39:26.373 cpu : usr=97.15%, sys=2.65%, ctx=12, majf=0, minf=175 00:39:26.373 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.373 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.373 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:26.373 filename1: (groupid=0, jobs=1): err= 0: pid=94833: Fri Jul 12 01:57:52 2024 00:39:26.373 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10032msec) 00:39:26.373 slat (nsec): min=5428, max=30644, avg=6346.94, stdev=1408.53 00:39:26.373 clat (usec): min=40949, max=42917, avg=41955.86, stdev=237.13 00:39:26.373 lat (usec): min=40954, max=42922, avg=41962.21, stdev=237.27 00:39:26.373 clat percentiles (usec): 00:39:26.373 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:39:26.373 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:26.373 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:26.373 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:26.373 | 99.99th=[42730] 00:39:26.373 bw ( KiB/s): min= 352, max= 384, per=33.34%, avg=380.80, stdev= 9.85, samples=20 00:39:26.373 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:39:26.373 lat (msec) : 50=100.00% 00:39:26.373 cpu : usr=97.14%, sys=2.65%, ctx=13, majf=0, minf=60 00:39:26.373 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.373 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.373 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:26.373 00:39:26.373 Run status group 0 (all jobs): 00:39:26.373 READ: bw=1140KiB/s (1167kB/s), 381KiB/s-759KiB/s (390kB/s-777kB/s), io=11.2MiB (11.7MB), run=10032-10038msec 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:39:26.373 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 00:39:26.374 real 0m11.395s 00:39:26.374 user 0m36.169s 00:39:26.374 sys 0m0.796s 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 ************************************ 00:39:26.374 END TEST fio_dif_1_multi_subsystems 00:39:26.374 ************************************ 00:39:26.374 01:57:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:26.374 01:57:52 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:26.374 01:57:52 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 ************************************ 00:39:26.374 START TEST fio_dif_rand_params 00:39:26.374 ************************************ 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:26.374 01:57:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 bdev_null0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:26.374 [2024-07-12 01:57:52.401926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:26.374 { 00:39:26.374 "params": { 00:39:26.374 "name": "Nvme$subsystem", 00:39:26.374 "trtype": "$TEST_TRANSPORT", 00:39:26.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:26.374 "adrfam": "ipv4", 00:39:26.374 "trsvcid": "$NVMF_PORT", 00:39:26.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:26.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:26.374 "hdgst": ${hdgst:-false}, 00:39:26.374 "ddgst": 
${ddgst:-false} 00:39:26.374 }, 00:39:26.374 "method": "bdev_nvme_attach_controller" 00:39:26.374 } 00:39:26.374 EOF 00:39:26.374 )") 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
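Note that the fio run here goes through the SPDK bdev plugin rather than the kernel NVMe/TCP initiator: the harness preloads build/fio/spdk_bdev and feeds the generated JSON (printed just below) plus the job file over /dev/fd. A stripped-down equivalent, assuming the config and job were written to ordinary files instead of file descriptors:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.job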
00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:26.374 "params": { 00:39:26.374 "name": "Nvme0", 00:39:26.374 "trtype": "tcp", 00:39:26.374 "traddr": "10.0.0.2", 00:39:26.374 "adrfam": "ipv4", 00:39:26.374 "trsvcid": "4420", 00:39:26.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:26.374 "hdgst": false, 00:39:26.374 "ddgst": false 00:39:26.374 }, 00:39:26.374 "method": "bdev_nvme_attach_controller" 00:39:26.374 }' 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:26.374 01:57:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.639 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:26.639 ... 
00:39:26.639 fio-3.35 00:39:26.639 Starting 3 threads 00:39:26.639 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.218 00:39:33.218 filename0: (groupid=0, jobs=1): err= 0: pid=97020: Fri Jul 12 01:57:58 2024 00:39:33.218 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(131MiB/5037msec) 00:39:33.218 slat (nsec): min=5465, max=40989, avg=7616.44, stdev=2247.89 00:39:33.218 clat (usec): min=5617, max=90719, avg=14406.28, stdev=12577.96 00:39:33.218 lat (usec): min=5625, max=90728, avg=14413.89, stdev=12577.88 00:39:33.218 clat percentiles (usec): 00:39:33.218 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 8455], 00:39:33.218 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:39:33.218 | 70.00th=[12256], 80.00th=[13173], 90.00th=[15664], 95.00th=[50594], 00:39:33.219 | 99.00th=[54789], 99.50th=[56886], 99.90th=[90702], 99.95th=[90702], 00:39:33.219 | 99.99th=[90702] 00:39:33.219 bw ( KiB/s): min=23040, max=29696, per=32.63%, avg=26752.00, stdev=2396.18, samples=10 00:39:33.219 iops : min= 180, max= 232, avg=209.00, stdev=18.72, samples=10 00:39:33.219 lat (msec) : 10=37.88%, 20=52.67%, 50=3.44%, 100=6.01% 00:39:33.219 cpu : usr=95.00%, sys=4.73%, ctx=13, majf=0, minf=127 00:39:33.219 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=1048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=97021: Fri Jul 12 01:57:58 2024 00:39:33.219 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5045msec) 00:39:33.219 slat (nsec): min=5415, max=32160, avg=6609.02, stdev=1480.80 00:39:33.219 clat (usec): min=5532, max=92165, avg=14600.15, stdev=12097.83 00:39:33.219 lat (usec): min=5538, max=92171, avg=14606.76, stdev=12097.86 00:39:33.219 clat percentiles (usec): 00:39:33.219 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 8717], 00:39:33.219 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11207], 60.00th=[12125], 00:39:33.219 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16319], 95.00th=[50594], 00:39:33.219 | 99.00th=[54264], 99.50th=[56886], 99.90th=[90702], 99.95th=[91751], 00:39:33.219 | 99.99th=[91751] 00:39:33.219 bw ( KiB/s): min=13056, max=32256, per=32.20%, avg=26393.60, stdev=5998.21, samples=10 00:39:33.219 iops : min= 102, max= 252, avg=206.20, stdev=46.86, samples=10 00:39:33.219 lat (msec) : 10=33.30%, 20=57.70%, 50=3.48%, 100=5.52% 00:39:33.219 cpu : usr=95.70%, sys=4.04%, ctx=17, majf=0, minf=80 00:39:33.219 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=1033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.219 filename0: (groupid=0, jobs=1): err= 0: pid=97022: Fri Jul 12 01:57:58 2024 00:39:33.219 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5004msec) 00:39:33.219 slat (nsec): min=5453, max=33939, avg=7390.62, stdev=1745.79 00:39:33.219 clat (usec): min=4739, max=91097, avg=13042.63, stdev=10443.69 00:39:33.219 lat (usec): min=4748, max=91103, avg=13050.02, stdev=10443.64 00:39:33.219 clat percentiles (usec): 
00:39:33.219 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8356], 00:39:33.219 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10683], 60.00th=[11207], 00:39:33.219 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14615], 95.00th=[49021], 00:39:33.219 | 99.00th=[52167], 99.50th=[54264], 99.90th=[89654], 99.95th=[90702], 00:39:33.219 | 99.99th=[90702] 00:39:33.219 bw ( KiB/s): min=23296, max=36864, per=35.85%, avg=29388.80, stdev=4858.46, samples=10 00:39:33.219 iops : min= 182, max= 288, avg=229.60, stdev=37.96, samples=10 00:39:33.219 lat (msec) : 10=39.22%, 20=54.43%, 50=2.96%, 100=3.39% 00:39:33.219 cpu : usr=95.18%, sys=4.54%, ctx=13, majf=0, minf=95 00:39:33.219 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.219 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.219 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.219 00:39:33.219 Run status group 0 (all jobs): 00:39:33.219 READ: bw=80.1MiB/s (83.9MB/s), 25.6MiB/s-28.7MiB/s (26.8MB/s-30.1MB/s), io=404MiB (423MB), run=5004-5045msec 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
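The teardown and rebuild that begins here (destroy subsystem 0, then NULL_DIF=2 with three files) repeats one four-step RPC pattern per subsystem, which the following trace walks through for cnode0, cnode1 and cnode2: create a DIF-type-2 null bdev, create the subsystem, add the bdev as a namespace, and open a TCP listener. Condensed into direct rpc.py calls (rpc_cmd in this harness wraps that script; the path below is an assumption for this workspace):

#!/usr/bin/env bash
# Condensed form of create_subsystems 0 1 2 as traced below: per subsystem a
# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2,
# exported over NVMe/TCP on 10.0.0.2:4420.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

for sub in 0 1 2; do
    "$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done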
00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 bdev_null0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 [2024-07-12 01:57:58.519624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 bdev_null1 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 bdev_null2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:33.219 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:33.220 { 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme$subsystem", 00:39:33.220 "trtype": "$TEST_TRANSPORT", 00:39:33.220 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "$NVMF_PORT", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.220 "hdgst": ${hdgst:-false}, 00:39:33.220 "ddgst": ${ddgst:-false} 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 } 00:39:33.220 EOF 00:39:33.220 )") 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:33.220 { 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme$subsystem", 00:39:33.220 "trtype": "$TEST_TRANSPORT", 00:39:33.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "$NVMF_PORT", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.220 "hdgst": ${hdgst:-false}, 00:39:33.220 "ddgst": ${ddgst:-false} 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 } 00:39:33.220 EOF 00:39:33.220 )") 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:33.220 { 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme$subsystem", 00:39:33.220 "trtype": "$TEST_TRANSPORT", 00:39:33.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "$NVMF_PORT", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.220 "hdgst": ${hdgst:-false}, 00:39:33.220 "ddgst": ${ddgst:-false} 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 } 00:39:33.220 EOF 00:39:33.220 )") 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme0", 00:39:33.220 "trtype": "tcp", 00:39:33.220 "traddr": "10.0.0.2", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "4420", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:33.220 "hdgst": false, 00:39:33.220 "ddgst": false 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 },{ 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme1", 00:39:33.220 "trtype": "tcp", 00:39:33.220 "traddr": "10.0.0.2", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "4420", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:33.220 "hdgst": false, 00:39:33.220 "ddgst": false 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 },{ 00:39:33.220 "params": { 00:39:33.220 "name": "Nvme2", 00:39:33.220 "trtype": "tcp", 00:39:33.220 "traddr": "10.0.0.2", 00:39:33.220 "adrfam": "ipv4", 00:39:33.220 "trsvcid": "4420", 00:39:33.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:33.220 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:33.220 "hdgst": false, 00:39:33.220 "ddgst": false 00:39:33.220 }, 00:39:33.220 "method": "bdev_nvme_attach_controller" 00:39:33.220 }' 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:33.220 01:57:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:33.220 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:33.220 ... 00:39:33.220 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:33.220 ... 00:39:33.220 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:33.220 ... 00:39:33.220 fio-3.35 00:39:33.220 Starting 24 threads 00:39:33.220 EAL: No free 2048 kB hugepages reported on node 1 00:39:45.449 00:39:45.449 filename0: (groupid=0, jobs=1): err= 0: pid=98520: Fri Jul 12 01:58:10 2024 00:39:45.449 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.5MiB/10042msec) 00:39:45.449 slat (nsec): min=5597, max=82236, avg=16322.14, stdev=11734.60 00:39:45.449 clat (usec): min=1653, max=53073, avg=32008.60, stdev=4740.33 00:39:45.449 lat (usec): min=1673, max=53097, avg=32024.92, stdev=4740.07 00:39:45.449 clat percentiles (usec): 00:39:45.449 | 1.00th=[ 5276], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:39:45.449 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.449 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:39:45.449 | 99.00th=[46400], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:39:45.449 | 99.99th=[53216] 00:39:45.449 bw ( KiB/s): min= 1916, max= 2688, per=4.15%, avg=1989.80, stdev=173.32, samples=20 00:39:45.449 iops : min= 479, max= 672, avg=497.45, stdev=43.33, samples=20 00:39:45.449 lat (msec) : 2=0.56%, 4=0.24%, 10=1.12%, 20=1.52%, 50=96.51% 00:39:45.449 lat (msec) : 100=0.04% 00:39:45.449 cpu : usr=99.05%, sys=0.67%, ctx=22, majf=0, minf=35 00:39:45.449 IO depths : 1=5.5%, 2=11.7%, 4=24.5%, 8=51.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:39:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.449 filename0: (groupid=0, jobs=1): err= 0: pid=98521: Fri Jul 12 01:58:10 2024 00:39:45.449 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:39:45.449 slat (nsec): min=5592, max=76964, avg=13835.64, stdev=10737.20 00:39:45.449 clat (usec): min=14164, max=35006, avg=32579.58, stdev=1331.88 00:39:45.449 lat (usec): min=14181, max=35013, avg=32593.41, stdev=1329.79 00:39:45.449 clat percentiles (usec): 00:39:45.449 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.449 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:39:45.449 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:39:45.449 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:39:45.449 | 99.99th=[34866] 00:39:45.449 bw ( KiB/s): min= 1916, max= 2048, per=4.09%, avg=1959.95, stdev=60.89, samples=19 00:39:45.449 iops : min= 479, max= 512, avg=489.95, stdev=15.17, samples=19 
00:39:45.449 lat (msec) : 20=0.33%, 50=99.67% 00:39:45.449 cpu : usr=99.18%, sys=0.54%, ctx=18, majf=0, minf=29 00:39:45.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.449 filename0: (groupid=0, jobs=1): err= 0: pid=98522: Fri Jul 12 01:58:10 2024 00:39:45.449 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10015msec) 00:39:45.449 slat (nsec): min=5867, max=98855, avg=26117.45, stdev=15109.29 00:39:45.449 clat (usec): min=24658, max=51481, avg=32604.10, stdev=1239.93 00:39:45.449 lat (usec): min=24681, max=51509, avg=32630.22, stdev=1238.44 00:39:45.449 clat percentiles (usec): 00:39:45.449 | 1.00th=[31851], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:39:45.449 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.449 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.449 | 99.00th=[33817], 99.50th=[34341], 99.90th=[51643], 99.95th=[51643], 00:39:45.449 | 99.99th=[51643] 00:39:45.449 bw ( KiB/s): min= 1795, max= 2048, per=4.06%, avg=1945.55, stdev=66.69, samples=20 00:39:45.449 iops : min= 448, max= 512, avg=486.35, stdev=16.76, samples=20 00:39:45.449 lat (msec) : 50=99.67%, 100=0.33% 00:39:45.449 cpu : usr=99.13%, sys=0.60%, ctx=10, majf=0, minf=17 00:39:45.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.449 filename0: (groupid=0, jobs=1): err= 0: pid=98523: Fri Jul 12 01:58:10 2024 00:39:45.449 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.1MiB/10042msec) 00:39:45.449 slat (nsec): min=5633, max=93382, avg=24666.26, stdev=14422.92 00:39:45.449 clat (usec): min=13566, max=56639, avg=32662.09, stdev=1917.69 00:39:45.449 lat (usec): min=13601, max=56655, avg=32686.75, stdev=1916.53 00:39:45.449 clat percentiles (usec): 00:39:45.449 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:39:45.449 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.449 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.449 | 99.00th=[34341], 99.50th=[53740], 99.90th=[56361], 99.95th=[56886], 00:39:45.449 | 99.99th=[56886] 00:39:45.449 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1943.05, stdev=68.79, samples=20 00:39:45.449 iops : min= 448, max= 512, avg=485.75, stdev=17.21, samples=20 00:39:45.449 lat (msec) : 20=0.08%, 50=99.34%, 100=0.57% 00:39:45.449 cpu : usr=98.92%, sys=0.66%, ctx=156, majf=0, minf=30 00:39:45.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.449 filename0: (groupid=0, jobs=1): err= 0: pid=98524: Fri Jul 12 
01:58:10 2024 00:39:45.449 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:39:45.449 slat (nsec): min=5646, max=82341, avg=21180.93, stdev=13301.93 00:39:45.449 clat (usec): min=14627, max=49064, avg=32506.55, stdev=1510.20 00:39:45.449 lat (usec): min=14656, max=49085, avg=32527.73, stdev=1509.73 00:39:45.449 clat percentiles (usec): 00:39:45.449 | 1.00th=[27657], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:39:45.449 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.449 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.449 | 99.00th=[34341], 99.50th=[34341], 99.90th=[46400], 99.95th=[47449], 00:39:45.449 | 99.99th=[49021] 00:39:45.449 bw ( KiB/s): min= 1916, max= 2048, per=4.09%, avg=1959.95, stdev=59.24, samples=19 00:39:45.449 iops : min= 479, max= 512, avg=489.95, stdev=14.75, samples=19 00:39:45.449 lat (msec) : 20=0.49%, 50=99.51% 00:39:45.449 cpu : usr=97.95%, sys=1.23%, ctx=910, majf=0, minf=36 00:39:45.449 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.449 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename0: (groupid=0, jobs=1): err= 0: pid=98525: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec) 00:39:45.450 slat (nsec): min=5509, max=92747, avg=24054.39, stdev=14580.21 00:39:45.450 clat (usec): min=12655, max=58633, avg=32604.19, stdev=2294.39 00:39:45.450 lat (usec): min=12660, max=58650, avg=32628.24, stdev=2293.66 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[25035], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.450 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.450 | 99.00th=[40109], 99.50th=[42206], 99.90th=[58459], 99.95th=[58459], 00:39:45.450 | 99.99th=[58459] 00:39:45.450 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1946.53, stdev=66.17, samples=19 00:39:45.450 iops : min= 448, max= 512, avg=486.63, stdev=16.54, samples=19 00:39:45.450 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.450 cpu : usr=98.95%, sys=0.70%, ctx=43, majf=0, minf=28 00:39:45.450 IO depths : 1=5.2%, 2=11.2%, 4=24.6%, 8=51.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename0: (groupid=0, jobs=1): err= 0: pid=98526: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=518, BW=2074KiB/s (2123kB/s)(20.3MiB/10012msec) 00:39:45.450 slat (nsec): min=5574, max=92621, avg=12626.74, stdev=10622.32 00:39:45.450 clat (usec): min=5840, max=57861, avg=30769.40, stdev=5499.88 00:39:45.450 lat (usec): min=5857, max=57901, avg=30782.03, stdev=5501.60 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[14484], 5.00th=[20579], 10.00th=[22414], 20.00th=[27657], 00:39:45.450 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.450 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 
95.00th=[34866], 00:39:45.450 | 99.00th=[46924], 99.50th=[51119], 99.90th=[57934], 99.95th=[57934], 00:39:45.450 | 99.99th=[57934] 00:39:45.450 bw ( KiB/s): min= 1916, max= 2400, per=4.33%, avg=2076.47, stdev=170.39, samples=19 00:39:45.450 iops : min= 479, max= 600, avg=519.00, stdev=42.42, samples=19 00:39:45.450 lat (msec) : 10=0.31%, 20=3.12%, 50=95.99%, 100=0.58% 00:39:45.450 cpu : usr=98.30%, sys=0.95%, ctx=110, majf=0, minf=29 00:39:45.450 IO depths : 1=3.9%, 2=8.1%, 4=19.1%, 8=60.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename0: (groupid=0, jobs=1): err= 0: pid=98527: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10018msec) 00:39:45.450 slat (nsec): min=5579, max=67213, avg=14199.62, stdev=9744.52 00:39:45.450 clat (usec): min=21078, max=47367, avg=32669.77, stdev=2300.37 00:39:45.450 lat (usec): min=21088, max=47374, avg=32683.97, stdev=2300.33 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:39:45.450 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:39:45.450 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[47449], 00:39:45.450 | 99.99th=[47449] 00:39:45.450 bw ( KiB/s): min= 1916, max= 2048, per=4.07%, avg=1952.75, stdev=54.79, samples=20 00:39:45.450 iops : min= 479, max= 512, avg=488.15, stdev=13.65, samples=20 00:39:45.450 lat (msec) : 50=100.00% 00:39:45.450 cpu : usr=98.33%, sys=0.94%, ctx=110, majf=0, minf=26 00:39:45.450 IO depths : 1=4.9%, 2=10.9%, 4=24.4%, 8=52.2%, 16=7.6%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename1: (groupid=0, jobs=1): err= 0: pid=98528: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec) 00:39:45.450 slat (nsec): min=5653, max=82620, avg=23285.63, stdev=12704.27 00:39:45.450 clat (usec): min=13500, max=56140, avg=32567.31, stdev=1826.90 00:39:45.450 lat (usec): min=13524, max=56160, avg=32590.59, stdev=1826.36 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.450 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.450 | 99.00th=[33817], 99.50th=[34341], 99.90th=[55837], 99.95th=[56361], 00:39:45.450 | 99.99th=[56361] 00:39:45.450 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1946.74, stdev=68.61, samples=19 00:39:45.450 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:39:45.450 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.450 cpu : usr=98.98%, sys=0.67%, ctx=119, majf=0, minf=32 00:39:45.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename1: (groupid=0, jobs=1): err= 0: pid=98529: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=486, BW=1948KiB/s (1994kB/s)(19.0MiB/10002msec) 00:39:45.450 slat (nsec): min=5685, max=56728, avg=11248.41, stdev=7834.59 00:39:45.450 clat (usec): min=20866, max=68030, avg=32768.17, stdev=1864.14 00:39:45.450 lat (usec): min=20875, max=68062, avg=32779.42, stdev=1864.33 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:39:45.450 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:39:45.450 | 99.00th=[34341], 99.50th=[44827], 99.90th=[55837], 99.95th=[56361], 00:39:45.450 | 99.99th=[67634] 00:39:45.450 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1949.26, stdev=68.46, samples=19 00:39:45.450 iops : min= 448, max= 512, avg=487.32, stdev=17.11, samples=19 00:39:45.450 lat (msec) : 50=99.79%, 100=0.21% 00:39:45.450 cpu : usr=99.22%, sys=0.50%, ctx=18, majf=0, minf=26 00:39:45.450 IO depths : 1=5.2%, 2=11.4%, 4=24.8%, 8=51.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename1: (groupid=0, jobs=1): err= 0: pid=98530: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10018msec) 00:39:45.450 slat (nsec): min=5583, max=74063, avg=9410.18, stdev=6671.18 00:39:45.450 clat (usec): min=24219, max=47883, avg=32725.45, stdev=1139.47 00:39:45.450 lat (usec): min=24227, max=47891, avg=32734.86, stdev=1138.97 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:39:45.450 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33424], 00:39:45.450 | 99.00th=[34341], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:39:45.450 | 99.99th=[47973] 00:39:45.450 bw ( KiB/s): min= 1856, max= 2048, per=4.06%, avg=1947.20, stdev=61.38, samples=20 00:39:45.450 iops : min= 464, max= 512, avg=486.80, stdev=15.34, samples=20 00:39:45.450 lat (msec) : 50=100.00% 00:39:45.450 cpu : usr=99.12%, sys=0.58%, ctx=65, majf=0, minf=40 00:39:45.450 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename1: (groupid=0, jobs=1): err= 0: pid=98531: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10018msec) 00:39:45.450 slat (nsec): min=5595, max=92852, avg=9262.79, stdev=6050.27 00:39:45.450 clat (usec): min=5834, max=54485, avg=32447.55, stdev=2562.93 00:39:45.450 lat (usec): min=5855, max=54493, avg=32456.81, 
stdev=2561.08 00:39:45.450 clat percentiles (usec): 00:39:45.450 | 1.00th=[16057], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:39:45.450 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:39:45.450 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:39:45.450 | 99.00th=[34341], 99.50th=[34341], 99.90th=[47973], 99.95th=[49021], 00:39:45.450 | 99.99th=[54264] 00:39:45.450 bw ( KiB/s): min= 1916, max= 2180, per=4.09%, avg=1963.90, stdev=75.57, samples=20 00:39:45.450 iops : min= 479, max= 545, avg=490.90, stdev=18.81, samples=20 00:39:45.450 lat (msec) : 10=0.32%, 20=0.81%, 50=98.82%, 100=0.04% 00:39:45.450 cpu : usr=98.98%, sys=0.69%, ctx=86, majf=0, minf=30 00:39:45.450 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:45.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.450 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.450 filename1: (groupid=0, jobs=1): err= 0: pid=98532: Fri Jul 12 01:58:10 2024 00:39:45.450 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10019msec) 00:39:45.450 slat (nsec): min=5588, max=78409, avg=18099.39, stdev=13260.48 00:39:45.450 clat (usec): min=14184, max=54740, avg=31856.55, stdev=3540.53 00:39:45.450 lat (usec): min=14201, max=54747, avg=31874.65, stdev=3542.18 00:39:45.450 clat percentiles (usec): 00:39:45.451 | 1.00th=[18744], 5.00th=[23200], 10.00th=[31327], 20.00th=[32113], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.451 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:39:45.451 | 99.00th=[41681], 99.50th=[47449], 99.90th=[54789], 99.95th=[54789], 00:39:45.451 | 99.99th=[54789] 00:39:45.451 bw ( KiB/s): min= 1920, max= 2352, per=4.16%, avg=1995.50, stdev=122.02, samples=20 00:39:45.451 iops : min= 480, max= 588, avg=498.80, stdev=30.40, samples=20 00:39:45.451 lat (msec) : 20=2.00%, 50=97.68%, 100=0.32% 00:39:45.451 cpu : usr=98.94%, sys=0.63%, ctx=132, majf=0, minf=39 00:39:45.451 IO depths : 1=5.1%, 2=10.7%, 4=23.0%, 8=53.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename1: (groupid=0, jobs=1): err= 0: pid=98533: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec) 00:39:45.451 slat (nsec): min=5633, max=93541, avg=24731.19, stdev=15307.74 00:39:45.451 clat (usec): min=13478, max=56200, avg=32547.95, stdev=1833.54 00:39:45.451 lat (usec): min=13500, max=56216, avg=32572.68, stdev=1832.97 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.451 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.451 | 99.00th=[33817], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:39:45.451 | 99.99th=[56361] 00:39:45.451 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1946.74, stdev=68.61, samples=19 00:39:45.451 iops : min= 448, max= 512, avg=486.68, 
stdev=17.15, samples=19 00:39:45.451 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.451 cpu : usr=99.17%, sys=0.54%, ctx=14, majf=0, minf=24 00:39:45.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename1: (groupid=0, jobs=1): err= 0: pid=98534: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10002msec) 00:39:45.451 slat (nsec): min=5589, max=87717, avg=20892.05, stdev=15805.62 00:39:45.451 clat (usec): min=13710, max=56936, avg=32634.60, stdev=2254.55 00:39:45.451 lat (usec): min=13716, max=56953, avg=32655.50, stdev=2254.41 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[25035], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.451 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:39:45.451 | 99.00th=[40109], 99.50th=[41681], 99.90th=[56886], 99.95th=[56886], 00:39:45.451 | 99.99th=[56886] 00:39:45.451 bw ( KiB/s): min= 1795, max= 2048, per=4.06%, avg=1946.68, stdev=68.33, samples=19 00:39:45.451 iops : min= 448, max= 512, avg=486.63, stdev=17.18, samples=19 00:39:45.451 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.451 cpu : usr=99.20%, sys=0.51%, ctx=13, majf=0, minf=31 00:39:45.451 IO depths : 1=4.2%, 2=10.5%, 4=24.9%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename1: (groupid=0, jobs=1): err= 0: pid=98535: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10016msec) 00:39:45.451 slat (nsec): min=5578, max=97224, avg=20398.39, stdev=16668.06 00:39:45.451 clat (usec): min=18503, max=53638, avg=32454.69, stdev=3628.95 00:39:45.451 lat (usec): min=18509, max=53668, avg=32475.09, stdev=3629.64 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[21103], 5.00th=[25035], 10.00th=[31589], 20.00th=[32113], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.451 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[38011], 00:39:45.451 | 99.00th=[46400], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:39:45.451 | 99.99th=[53740] 00:39:45.451 bw ( KiB/s): min= 1792, max= 2144, per=4.09%, avg=1960.60, stdev=79.09, samples=20 00:39:45.451 iops : min= 448, max= 536, avg=490.15, stdev=19.77, samples=20 00:39:45.451 lat (msec) : 20=0.20%, 50=99.23%, 100=0.57% 00:39:45.451 cpu : usr=99.15%, sys=0.56%, ctx=19, majf=0, minf=40 00:39:45.451 IO depths : 1=4.3%, 2=8.8%, 4=19.6%, 8=58.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename2: 
(groupid=0, jobs=1): err= 0: pid=98536: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:39:45.451 slat (nsec): min=5726, max=80589, avg=20141.52, stdev=12154.66 00:39:45.451 clat (usec): min=14779, max=35051, avg=32514.51, stdev=1306.81 00:39:45.451 lat (usec): min=14789, max=35060, avg=32534.65, stdev=1306.68 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.451 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.451 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:39:45.451 | 99.99th=[34866] 00:39:45.451 bw ( KiB/s): min= 1916, max= 2048, per=4.09%, avg=1959.95, stdev=60.89, samples=19 00:39:45.451 iops : min= 479, max= 512, avg=489.95, stdev=15.17, samples=19 00:39:45.451 lat (msec) : 20=0.33%, 50=99.67% 00:39:45.451 cpu : usr=99.08%, sys=0.63%, ctx=30, majf=0, minf=31 00:39:45.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename2: (groupid=0, jobs=1): err= 0: pid=98537: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=735, BW=2943KiB/s (3013kB/s)(28.8MiB/10019msec) 00:39:45.451 slat (nsec): min=5575, max=85376, avg=7205.57, stdev=2865.82 00:39:45.451 clat (usec): min=3269, max=33100, avg=21695.25, stdev=2901.69 00:39:45.451 lat (usec): min=3296, max=33107, avg=21702.45, stdev=2901.02 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[13435], 5.00th=[17695], 10.00th=[17695], 20.00th=[19006], 00:39:45.451 | 30.00th=[20317], 40.00th=[21627], 50.00th=[22414], 60.00th=[22938], 00:39:45.451 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:45.451 | 99.00th=[25297], 99.50th=[26084], 99.90th=[33162], 99.95th=[33162], 00:39:45.451 | 99.99th=[33162] 00:39:45.451 bw ( KiB/s): min= 2858, max= 3265, per=6.14%, avg=2944.15, stdev=83.66, samples=20 00:39:45.451 iops : min= 714, max= 816, avg=735.90, stdev=20.93, samples=20 00:39:45.451 lat (msec) : 4=0.19%, 10=0.64%, 20=26.83%, 50=72.34% 00:39:45.451 cpu : usr=98.50%, sys=0.88%, ctx=45, majf=0, minf=75 00:39:45.451 IO depths : 1=0.1%, 2=0.1%, 4=6.2%, 8=81.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=88.9%, 8=5.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=7371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename2: (groupid=0, jobs=1): err= 0: pid=98538: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10012msec) 00:39:45.451 slat (nsec): min=5597, max=77831, avg=18430.80, stdev=12731.79 00:39:45.451 clat (usec): min=14241, max=55934, avg=32255.19, stdev=2862.88 00:39:45.451 lat (usec): min=14248, max=55952, avg=32273.62, stdev=2864.25 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[19530], 5.00th=[27919], 10.00th=[32113], 20.00th=[32113], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.451 | 
70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.451 | 99.00th=[41681], 99.50th=[46924], 99.90th=[55837], 99.95th=[55837], 00:39:45.451 | 99.99th=[55837] 00:39:45.451 bw ( KiB/s): min= 1916, max= 2224, per=4.11%, avg=1972.00, stdev=87.42, samples=19 00:39:45.451 iops : min= 479, max= 556, avg=493.00, stdev=21.86, samples=19 00:39:45.451 lat (msec) : 20=1.30%, 50=98.50%, 100=0.20% 00:39:45.451 cpu : usr=97.07%, sys=1.61%, ctx=119, majf=0, minf=38 00:39:45.451 IO depths : 1=4.9%, 2=10.7%, 4=23.8%, 8=52.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.451 filename2: (groupid=0, jobs=1): err= 0: pid=98539: Fri Jul 12 01:58:10 2024 00:39:45.451 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:39:45.451 slat (nsec): min=5586, max=99488, avg=17576.52, stdev=15544.28 00:39:45.451 clat (usec): min=13754, max=58141, avg=32655.09, stdev=1912.16 00:39:45.451 lat (usec): min=13761, max=58157, avg=32672.67, stdev=1910.86 00:39:45.451 clat percentiles (usec): 00:39:45.451 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:39:45.451 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.451 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.451 | 99.00th=[33817], 99.50th=[34341], 99.90th=[57934], 99.95th=[57934], 00:39:45.451 | 99.99th=[57934] 00:39:45.451 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1946.53, stdev=68.70, samples=19 00:39:45.451 iops : min= 448, max= 512, avg=486.63, stdev=17.18, samples=19 00:39:45.451 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.451 cpu : usr=99.08%, sys=0.59%, ctx=42, majf=0, minf=32 00:39:45.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.451 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.452 filename2: (groupid=0, jobs=1): err= 0: pid=98540: Fri Jul 12 01:58:10 2024 00:39:45.452 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10002msec) 00:39:45.452 slat (nsec): min=5892, max=98102, avg=25423.97, stdev=13961.13 00:39:45.452 clat (usec): min=13579, max=56725, avg=32566.57, stdev=1852.84 00:39:45.452 lat (usec): min=13585, max=56742, avg=32592.00, stdev=1851.85 00:39:45.452 clat percentiles (usec): 00:39:45.452 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:39:45.452 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.452 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.452 | 99.00th=[33817], 99.50th=[34341], 99.90th=[56886], 99.95th=[56886], 00:39:45.452 | 99.99th=[56886] 00:39:45.452 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1946.74, stdev=68.61, samples=19 00:39:45.452 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:39:45.452 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.452 cpu : usr=99.13%, sys=0.58%, ctx=14, majf=0, minf=43 00:39:45.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.452 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.452 filename2: (groupid=0, jobs=1): err= 0: pid=98541: Fri Jul 12 01:58:10 2024 00:39:45.452 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10017msec) 00:39:45.452 slat (nsec): min=5589, max=97519, avg=13978.69, stdev=9262.08 00:39:45.452 clat (usec): min=15824, max=51886, avg=32606.97, stdev=1899.47 00:39:45.452 lat (usec): min=15830, max=51910, avg=32620.95, stdev=1899.90 00:39:45.452 clat percentiles (usec): 00:39:45.452 | 1.00th=[21103], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:39:45.452 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:39:45.452 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.452 | 99.00th=[34341], 99.50th=[34866], 99.90th=[51643], 99.95th=[51643], 00:39:45.452 | 99.99th=[51643] 00:39:45.452 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1951.80, stdev=70.52, samples=20 00:39:45.452 iops : min= 448, max= 512, avg=487.95, stdev=17.63, samples=20 00:39:45.452 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:39:45.452 cpu : usr=99.02%, sys=0.63%, ctx=71, majf=0, minf=40 00:39:45.452 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.452 filename2: (groupid=0, jobs=1): err= 0: pid=98542: Fri Jul 12 01:58:10 2024 00:39:45.452 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:39:45.452 slat (nsec): min=5628, max=94878, avg=27556.75, stdev=16503.50 00:39:45.452 clat (usec): min=13457, max=57776, avg=32558.79, stdev=1904.46 00:39:45.452 lat (usec): min=13479, max=57792, avg=32586.35, stdev=1902.96 00:39:45.452 clat percentiles (usec): 00:39:45.452 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:39:45.452 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:39:45.452 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:39:45.452 | 99.00th=[33817], 99.50th=[34341], 99.90th=[57934], 99.95th=[57934], 00:39:45.452 | 99.99th=[57934] 00:39:45.452 bw ( KiB/s): min= 1795, max= 2048, per=4.06%, avg=1946.68, stdev=68.33, samples=19 00:39:45.452 iops : min= 448, max= 512, avg=486.63, stdev=17.18, samples=19 00:39:45.452 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:39:45.452 cpu : usr=98.42%, sys=0.86%, ctx=115, majf=0, minf=32 00:39:45.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.452 filename2: (groupid=0, jobs=1): err= 0: pid=98543: Fri Jul 12 01:58:10 2024 00:39:45.452 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10015msec) 00:39:45.452 slat (nsec): min=5511, max=74197, avg=16942.73, stdev=11824.84 00:39:45.452 clat (usec): min=14759, 
max=56662, avg=32562.92, stdev=3104.19 00:39:45.452 lat (usec): min=14767, max=56711, avg=32579.86, stdev=3104.59 00:39:45.452 clat percentiles (usec): 00:39:45.452 | 1.00th=[22414], 5.00th=[28181], 10.00th=[32113], 20.00th=[32375], 00:39:45.452 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:39:45.452 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:39:45.452 | 99.00th=[46924], 99.50th=[54789], 99.90th=[56361], 99.95th=[56886], 00:39:45.452 | 99.99th=[56886] 00:39:45.452 bw ( KiB/s): min= 1795, max= 2048, per=4.07%, avg=1953.55, stdev=58.58, samples=20 00:39:45.452 iops : min= 448, max= 512, avg=488.35, stdev=14.75, samples=20 00:39:45.452 lat (msec) : 20=0.33%, 50=99.14%, 100=0.53% 00:39:45.452 cpu : usr=98.19%, sys=1.10%, ctx=47, majf=0, minf=28 00:39:45.452 IO depths : 1=3.1%, 2=8.8%, 4=23.4%, 8=55.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:39:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.452 issued rwts: total=4900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:45.452 00:39:45.452 Run status group 0 (all jobs): 00:39:45.452 READ: bw=46.8MiB/s (49.1MB/s), 1944KiB/s-2943KiB/s (1990kB/s-3013kB/s), io=470MiB (493MB), run=10001-10042msec 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:45.452 01:58:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 bdev_null0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:45.452 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.453 [2024-07-12 01:58:10.303875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.453 bdev_null1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local 
fio_dir=/usr/src/fio 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:45.453 { 00:39:45.453 "params": { 00:39:45.453 "name": "Nvme$subsystem", 00:39:45.453 "trtype": "$TEST_TRANSPORT", 00:39:45.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:45.453 "adrfam": "ipv4", 00:39:45.453 "trsvcid": "$NVMF_PORT", 00:39:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:45.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:45.453 "hdgst": ${hdgst:-false}, 00:39:45.453 "ddgst": ${ddgst:-false} 00:39:45.453 }, 00:39:45.453 "method": "bdev_nvme_attach_controller" 00:39:45.453 } 00:39:45.453 EOF 00:39:45.453 )") 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:45.453 { 00:39:45.453 "params": { 00:39:45.453 "name": "Nvme$subsystem", 00:39:45.453 "trtype": "$TEST_TRANSPORT", 00:39:45.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:45.453 "adrfam": "ipv4", 00:39:45.453 "trsvcid": "$NVMF_PORT", 00:39:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:45.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:45.453 "hdgst": ${hdgst:-false}, 00:39:45.453 "ddgst": ${ddgst:-false} 00:39:45.453 
}, 00:39:45.453 "method": "bdev_nvme_attach_controller" 00:39:45.453 } 00:39:45.453 EOF 00:39:45.453 )") 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:45.453 "params": { 00:39:45.453 "name": "Nvme0", 00:39:45.453 "trtype": "tcp", 00:39:45.453 "traddr": "10.0.0.2", 00:39:45.453 "adrfam": "ipv4", 00:39:45.453 "trsvcid": "4420", 00:39:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:45.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:45.453 "hdgst": false, 00:39:45.453 "ddgst": false 00:39:45.453 }, 00:39:45.453 "method": "bdev_nvme_attach_controller" 00:39:45.453 },{ 00:39:45.453 "params": { 00:39:45.453 "name": "Nvme1", 00:39:45.453 "trtype": "tcp", 00:39:45.453 "traddr": "10.0.0.2", 00:39:45.453 "adrfam": "ipv4", 00:39:45.453 "trsvcid": "4420", 00:39:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:45.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:45.453 "hdgst": false, 00:39:45.453 "ddgst": false 00:39:45.453 }, 00:39:45.453 "method": "bdev_nvme_attach_controller" 00:39:45.453 }' 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:45.453 01:58:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.453 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:45.453 ... 00:39:45.453 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:45.453 ... 
00:39:45.453 fio-3.35 00:39:45.453 Starting 4 threads 00:39:45.453 EAL: No free 2048 kB hugepages reported on node 1 00:39:50.779 00:39:50.779 filename0: (groupid=0, jobs=1): err= 0: pid=100733: Fri Jul 12 01:58:16 2024 00:39:50.779 read: IOPS=2505, BW=19.6MiB/s (20.5MB/s)(97.9MiB/5002msec) 00:39:50.779 slat (usec): min=5, max=109, avg= 6.17, stdev= 2.50 00:39:50.779 clat (usec): min=1372, max=5671, avg=3173.30, stdev=499.01 00:39:50.779 lat (usec): min=1396, max=5677, avg=3179.46, stdev=498.82 00:39:50.779 clat percentiles (usec): 00:39:50.779 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2802], 00:39:50.779 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3195], 00:39:50.779 | 70.00th=[ 3392], 80.00th=[ 3523], 90.00th=[ 3752], 95.00th=[ 4080], 00:39:50.779 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5407], 00:39:50.779 | 99.99th=[ 5669] 00:39:50.779 bw ( KiB/s): min=18768, max=21872, per=29.68%, avg=20103.11, stdev=882.58, samples=9 00:39:50.779 iops : min= 2346, max= 2734, avg=2512.89, stdev=110.32, samples=9 00:39:50.779 lat (msec) : 2=0.73%, 4=93.79%, 10=5.49% 00:39:50.779 cpu : usr=91.28%, sys=4.82%, ctx=186, majf=0, minf=0 00:39:50.779 IO depths : 1=0.1%, 2=10.2%, 4=60.5%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.779 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.779 issued rwts: total=12535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.779 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:50.779 filename0: (groupid=0, jobs=1): err= 0: pid=100734: Fri Jul 12 01:58:16 2024 00:39:50.779 read: IOPS=1999, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5002msec) 00:39:50.779 slat (nsec): min=5409, max=69049, avg=5957.30, stdev=1743.76 00:39:50.779 clat (usec): min=1537, max=6617, avg=3985.48, stdev=764.78 00:39:50.779 lat (usec): min=1543, max=6622, avg=3991.44, stdev=764.65 00:39:50.779 clat percentiles (usec): 00:39:50.780 | 1.00th=[ 2769], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3425], 00:39:50.780 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3884], 00:39:50.780 | 70.00th=[ 4113], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5735], 00:39:50.780 | 99.00th=[ 5997], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6456], 00:39:50.780 | 99.99th=[ 6587] 00:39:50.780 bw ( KiB/s): min=15328, max=16464, per=23.51%, avg=15925.33, stdev=353.45, samples=9 00:39:50.780 iops : min= 1916, max= 2058, avg=1990.67, stdev=44.18, samples=9 00:39:50.780 lat (msec) : 2=0.09%, 4=66.10%, 10=33.81% 00:39:50.780 cpu : usr=97.02%, sys=2.76%, ctx=11, majf=0, minf=9 00:39:50.780 IO depths : 1=0.2%, 2=0.7%, 4=71.3%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 issued rwts: total=9999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.780 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:50.780 filename1: (groupid=0, jobs=1): err= 0: pid=100735: Fri Jul 12 01:58:16 2024 00:39:50.780 read: IOPS=1981, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5002msec) 00:39:50.780 slat (nsec): min=5395, max=67869, avg=5994.90, stdev=1897.67 00:39:50.780 clat (usec): min=1564, max=7825, avg=4020.55, stdev=744.56 00:39:50.780 lat (usec): min=1570, max=7831, avg=4026.54, stdev=744.44 00:39:50.780 clat percentiles (usec): 00:39:50.780 | 1.00th=[ 3064], 5.00th=[ 3228], 10.00th=[ 
3392], 20.00th=[ 3490], 00:39:50.780 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3884], 00:39:50.780 | 70.00th=[ 4113], 80.00th=[ 4490], 90.00th=[ 5342], 95.00th=[ 5735], 00:39:50.780 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6587], 99.95th=[ 6915], 00:39:50.780 | 99.99th=[ 7832] 00:39:50.780 bw ( KiB/s): min=15392, max=16304, per=23.37%, avg=15827.44, stdev=359.12, samples=9 00:39:50.780 iops : min= 1924, max= 2038, avg=1978.33, stdev=44.92, samples=9 00:39:50.780 lat (msec) : 2=0.11%, 4=64.81%, 10=35.08% 00:39:50.780 cpu : usr=97.36%, sys=2.40%, ctx=11, majf=0, minf=9 00:39:50.780 IO depths : 1=0.2%, 2=0.4%, 4=71.5%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 issued rwts: total=9913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.780 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:50.780 filename1: (groupid=0, jobs=1): err= 0: pid=100736: Fri Jul 12 01:58:16 2024 00:39:50.780 read: IOPS=1981, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5003msec) 00:39:50.780 slat (nsec): min=5396, max=40243, avg=6068.21, stdev=1599.15 00:39:50.780 clat (usec): min=1686, max=8286, avg=4020.80, stdev=746.23 00:39:50.780 lat (usec): min=1704, max=8291, avg=4026.87, stdev=746.15 00:39:50.780 clat percentiles (usec): 00:39:50.780 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3490], 00:39:50.780 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3916], 00:39:50.780 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 5407], 95.00th=[ 5735], 00:39:50.780 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6587], 99.95th=[ 6718], 00:39:50.780 | 99.99th=[ 8291] 00:39:50.780 bw ( KiB/s): min=15184, max=16320, per=23.42%, avg=15866.67, stdev=331.88, samples=9 00:39:50.780 iops : min= 1898, max= 2040, avg=1983.33, stdev=41.48, samples=9 00:39:50.780 lat (msec) : 2=0.06%, 4=64.40%, 10=35.54% 00:39:50.780 cpu : usr=97.36%, sys=2.40%, ctx=8, majf=0, minf=9 00:39:50.780 IO depths : 1=0.2%, 2=0.4%, 4=71.3%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.780 issued rwts: total=9912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.780 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:50.780 00:39:50.780 Run status group 0 (all jobs): 00:39:50.780 READ: bw=66.1MiB/s (69.4MB/s), 15.5MiB/s-19.6MiB/s (16.2MB/s-20.5MB/s), io=331MiB (347MB), run=5002-5003msec 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
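(For reference, the create_subsystem/destroy_subsystem helpers being traced here reduce to a short RPC sequence against the running nvmf target. A minimal sketch follows, assuming the standard scripts/rpc.py client talking to the default /var/tmp/spdk.sock socket; rpc_cmd in the trace is simply the test suite's wrapper around that client, and the bdev size, DIF type, NQNs and the 10.0.0.2:4420 listener are the values shown in the trace itself.)

  # create_subsystem 0: null bdev with 16-byte metadata and DIF type 1,
  # exported as a namespace of nqn.2016-06.io.spdk:cnode0 over NVMe/TCP
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # destroy_subsystem 0: tear down in the reverse order, as in the trace above
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_null_delete bdev_null0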
00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 00:39:50.780 real 0m24.322s 00:39:50.780 user 5m13.460s 00:39:50.780 sys 0m4.187s 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 ************************************ 00:39:50.780 END TEST fio_dif_rand_params 00:39:50.780 ************************************ 00:39:50.780 01:58:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:50.780 01:58:16 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:50.780 01:58:16 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 ************************************ 00:39:50.780 START TEST fio_dif_digest 00:39:50.780 ************************************ 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:50.780 01:58:16 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 bdev_null0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:50.780 [2024-07-12 01:58:16.800505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:50.780 { 00:39:50.780 "params": { 00:39:50.780 "name": "Nvme$subsystem", 00:39:50.780 "trtype": "$TEST_TRANSPORT", 00:39:50.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.780 "adrfam": "ipv4", 00:39:50.780 "trsvcid": "$NVMF_PORT", 00:39:50.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.780 "hdgst": ${hdgst:-false}, 00:39:50.780 "ddgst": ${ddgst:-false} 00:39:50.780 }, 00:39:50.780 "method": "bdev_nvme_attach_controller" 00:39:50.780 } 00:39:50.780 EOF 00:39:50.780 )") 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:50.780 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:50.781 "params": { 00:39:50.781 "name": "Nvme0", 00:39:50.781 "trtype": "tcp", 00:39:50.781 "traddr": "10.0.0.2", 00:39:50.781 "adrfam": "ipv4", 00:39:50.781 "trsvcid": "4420", 00:39:50.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.781 "hdgst": true, 00:39:50.781 "ddgst": true 00:39:50.781 }, 00:39:50.781 "method": "bdev_nvme_attach_controller" 00:39:50.781 }' 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:50.781 01:58:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:51.045 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:51.045 ... 00:39:51.045 fio-3.35 00:39:51.045 Starting 3 threads 00:39:51.045 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.341 00:40:03.341 filename0: (groupid=0, jobs=1): err= 0: pid=102224: Fri Jul 12 01:58:27 2024 00:40:03.341 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(286MiB/10049msec) 00:40:03.341 slat (nsec): min=5776, max=61415, avg=6446.36, stdev=1549.89 00:40:03.341 clat (usec): min=6692, max=56221, avg=13160.02, stdev=2980.17 00:40:03.341 lat (usec): min=6698, max=56227, avg=13166.47, stdev=2980.20 00:40:03.341 clat percentiles (usec): 00:40:03.341 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11731], 00:40:03.341 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:40:03.341 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15401], 00:40:03.341 | 99.00th=[16319], 99.50th=[16909], 99.90th=[55313], 99.95th=[55837], 00:40:03.341 | 99.99th=[56361] 00:40:03.341 bw ( KiB/s): min=27648, max=32000, per=35.83%, avg=29235.20, stdev=1059.10, samples=20 00:40:03.341 iops : min= 216, max= 250, avg=228.40, stdev= 8.27, samples=20 00:40:03.341 lat (msec) : 10=7.87%, 20=91.78%, 50=0.04%, 100=0.31% 00:40:03.341 cpu : usr=95.85%, sys=3.93%, ctx=27, majf=0, minf=121 00:40:03.341 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.341 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:03.341 filename0: (groupid=0, jobs=1): err= 0: pid=102225: Fri Jul 12 01:58:27 2024 00:40:03.341 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10045msec) 00:40:03.341 slat (nsec): min=5776, max=63961, avg=6425.78, stdev=1505.62 00:40:03.341 clat (usec): min=7840, max=92814, avg=13230.57, stdev=4766.24 00:40:03.341 lat (usec): min=7846, max=92820, avg=13236.99, stdev=4766.25 00:40:03.341 clat percentiles (usec): 00:40:03.341 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11731], 00:40:03.341 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:40:03.341 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:40:03.341 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55837], 99.95th=[56886], 00:40:03.341 | 99.99th=[92799] 00:40:03.341 bw ( KiB/s): min=25856, max=31232, per=35.63%, avg=29068.80, stdev=1330.15, samples=20 00:40:03.341 iops : min= 202, max= 244, avg=227.10, stdev=10.39, samples=20 00:40:03.341 lat (msec) : 10=6.64%, 20=92.26%, 50=0.04%, 100=1.06% 00:40:03.341 cpu : usr=96.48%, sys=3.27%, ctx=21, majf=0, minf=143 00:40:03.341 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.341 latency : target=0, window=0, percentile=100.00%, depth=3 
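(The fio_plugin invocation above feeds fio's spdk_bdev ioengine from two anonymous pipes: /dev/fd/61 carries the generated job file and /dev/fd/62 the JSON config that attaches the target as a local bdev. A condensed sketch of an equivalent standalone run is given below; the attach parameters, including hdgst/ddgst being true for this digest test, are exactly the ones printed by gen_nvmf_target_json above, while the surrounding "subsystems"/"bdev" wrapper and the Nvme0n1 filename follow the usual SPDK JSON-config and fio-plugin conventions and do not appear verbatim in the trace, so treat them as illustrative.)

  # nvme.json -- bdev subsystem config consumed by the fio spdk_bdev plugin
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": true, "ddgst": true } } ] } ]
  }

  # run fio against bdev Nvme0n1 through the preloaded spdk_bdev engine,
  # using the same bs/iodepth/numjobs/runtime as the digest test above
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --filename=Nvme0n1 --thread=1 \
      --ioengine=spdk_bdev --spdk_json_conf=./nvme.json \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 --time_based --runtime=10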
00:40:03.341 filename0: (groupid=0, jobs=1): err= 0: pid=102227: Fri Jul 12 01:58:27 2024 00:40:03.341 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(231MiB/10045msec) 00:40:03.341 slat (nsec): min=5807, max=31754, avg=6477.72, stdev=870.52 00:40:03.341 clat (usec): min=8542, max=58458, avg=16292.88, stdev=8892.35 00:40:03.341 lat (usec): min=8548, max=58465, avg=16299.36, stdev=8892.36 00:40:03.341 clat percentiles (usec): 00:40:03.341 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12780], 20.00th=[13435], 00:40:03.341 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:40:03.341 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16450], 95.00th=[19530], 00:40:03.341 | 99.00th=[55837], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:40:03.341 | 99.99th=[58459] 00:40:03.341 bw ( KiB/s): min=20480, max=27648, per=28.93%, avg=23603.20, stdev=1501.02, samples=20 00:40:03.341 iops : min= 160, max= 216, avg=184.40, stdev=11.73, samples=20 00:40:03.341 lat (msec) : 10=1.57%, 20=93.45%, 50=0.22%, 100=4.77% 00:40:03.341 cpu : usr=97.08%, sys=2.65%, ctx=20, majf=0, minf=150 00:40:03.341 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.341 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.341 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:03.341 00:40:03.341 Run status group 0 (all jobs): 00:40:03.341 READ: bw=79.7MiB/s (83.5MB/s), 23.0MiB/s-28.4MiB/s (24.1MB/s-29.8MB/s), io=801MiB (840MB), run=10045-10049msec 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.341 00:40:03.341 real 0m11.073s 00:40:03.341 user 0m42.836s 00:40:03.341 sys 0m1.331s 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:03.341 01:58:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:03.341 ************************************ 00:40:03.341 END TEST fio_dif_digest 00:40:03.341 ************************************ 00:40:03.341 01:58:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:03.341 01:58:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:03.341 rmmod nvme_tcp 00:40:03.341 rmmod nvme_fabrics 00:40:03.341 rmmod nvme_keyring 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 91967 ']' 00:40:03.341 01:58:27 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 91967 00:40:03.341 01:58:27 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 91967 ']' 00:40:03.341 01:58:27 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 91967 00:40:03.341 01:58:27 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:40:03.341 01:58:27 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:03.341 01:58:27 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91967 00:40:03.341 01:58:28 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:03.341 01:58:28 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:03.341 01:58:28 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91967' 00:40:03.341 killing process with pid 91967 00:40:03.341 01:58:28 nvmf_dif -- common/autotest_common.sh@965 -- # kill 91967 00:40:03.341 01:58:28 nvmf_dif -- common/autotest_common.sh@970 -- # wait 91967 00:40:03.341 01:58:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:03.341 01:58:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:05.253 Waiting for block devices as requested 00:40:05.253 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:05.253 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:05.253 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:05.513 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:05.513 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:05.513 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:05.513 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:05.773 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:05.773 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:06.033 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:06.033 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:06.033 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:06.033 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:06.294 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:06.294 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:06.294 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:06.294 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:06.294 01:58:32 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:06.294 01:58:32 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:06.294 01:58:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:06.294 01:58:32 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:06.294 01:58:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.294 01:58:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> 
/dev/null' 00:40:06.294 01:58:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.834 01:58:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:08.834 00:40:08.834 real 1m17.264s 00:40:08.834 user 8m0.200s 00:40:08.834 sys 0m19.648s 00:40:08.834 01:58:34 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:08.834 01:58:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:08.834 ************************************ 00:40:08.834 END TEST nvmf_dif 00:40:08.834 ************************************ 00:40:08.834 01:58:34 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:08.834 01:58:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:08.834 01:58:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:08.834 01:58:34 -- common/autotest_common.sh@10 -- # set +x 00:40:08.834 ************************************ 00:40:08.834 START TEST nvmf_abort_qd_sizes 00:40:08.834 ************************************ 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:08.834 * Looking for test storage... 00:40:08.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.834 
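(prepare_net_devs below enumerates the supported NVMf NICs, and nvmftestinit then splits the two detected e810 ports between the host and a dedicated target network namespace before the target is started. Condensed, the namespace plumbing it performs is roughly the following; interface names, addresses and the 4420 port are the ones that appear in the subsequent trace, so this is only a readable summary of those traced commands, not an independent recipe.)

  # move one port into the target namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port on the initiator-side interface and sanity-check the path
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2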
01:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:40:08.834 01:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:16.969 01:58:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:16.969 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:16.969 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:16.969 Found net devices under 0000:31:00.0: cvl_0_0 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:16.969 Found net 
devices under 0000:31:00.1: cvl_0_1 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:16.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:16.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:40:16.969 00:40:16.969 --- 10.0.0.2 ping statistics --- 00:40:16.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.969 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:16.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:16.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:40:16.969 00:40:16.969 --- 10.0.0.1 ping statistics --- 00:40:16.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.969 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:16.969 01:58:42 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:20.268 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:20.268 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=112269 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 112269 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 112269 ']' 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:20.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:20.529 01:58:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:20.529 [2024-07-12 01:58:46.743645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:40:20.529 [2024-07-12 01:58:46.743689] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.529 EAL: No free 2048 kB hugepages reported on node 1 00:40:20.529 [2024-07-12 01:58:46.815437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:20.529 [2024-07-12 01:58:46.848977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:20.529 [2024-07-12 01:58:46.849014] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.529 [2024-07-12 01:58:46.849022] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.529 [2024-07-12 01:58:46.849032] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.529 [2024-07-12 01:58:46.849037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:20.529 [2024-07-12 01:58:46.849174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.529 [2024-07-12 01:58:46.849274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:20.529 [2024-07-12 01:58:46.849386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.529 [2024-07-12 01:58:46.849387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:21.470 01:58:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:21.470 ************************************ 00:40:21.470 START TEST spdk_target_abort 00:40:21.470 ************************************ 00:40:21.470 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:40:21.470 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:21.471 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:21.471 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.471 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.731 spdk_targetn1 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.731 [2024-07-12 01:58:47.916301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:21.731 [2024-07-12 01:58:47.956579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:21.731 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:21.732 01:58:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:21.732 EAL: No free 2048 kB hugepages reported on node 1 
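(For reference: the rpc_cmd-driven target setup traced above corresponds to the following standalone invocations. This is a minimal sketch only, assuming an nvmf_tgt is already running and serving the default /var/tmp/spdk.sock; the rpc.py and abort paths follow the workspace layout visible in this log.)

  # Sketch of the same configuration issued directly with rpc.py (not part of the captured output)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Attach the local NVMe device as bdev "spdk_target"; its namespace bdev becomes spdk_targetn1
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  # Create the TCP transport, a test subsystem, and expose the namespace on 10.0.0.2:4420
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # Drive abort traffic at a fixed queue depth, as the test does for qd 4, 24 and 64
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'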
00:40:21.992 [2024-07-12 01:58:48.125553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:392 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:21.992 [2024-07-12 01:58:48.125577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0032 p:1 m:0 dnr:0 00:40:21.992 [2024-07-12 01:58:48.132324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:544 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:40:21.992 [2024-07-12 01:58:48.132339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0047 p:1 m:0 dnr:0 00:40:21.992 [2024-07-12 01:58:48.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:568 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.132537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.155695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1328 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.155712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a8 p:1 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.187719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2456 len:8 PRP1 0x2000078be000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.187736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.205319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3136 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.205335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:008a p:0 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.206313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3208 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0093 p:0 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.211748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3312 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.211761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a2 p:0 m:0 dnr:0 00:40:21.993 [2024-07-12 01:58:48.235959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4072 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:40:21.993 [2024-07-12 01:58:48.235976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ff p:0 m:0 dnr:0 00:40:25.294 Initializing NVMe Controllers 00:40:25.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:25.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:25.294 Initialization complete. Launching workers. 
00:40:25.294 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12389, failed: 9 00:40:25.294 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3056, failed to submit 9342 00:40:25.294 success 821, unsuccess 2235, failed 0 00:40:25.294 01:58:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:25.294 01:58:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:25.294 EAL: No free 2048 kB hugepages reported on node 1 00:40:25.294 [2024-07-12 01:58:51.441551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:512 len:8 PRP1 0x200007c54000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.441588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.473396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:1168 len:8 PRP1 0x200007c40000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.473421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.489533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1672 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.489555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.505350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2024 len:8 PRP1 0x200007c58000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.505373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.529315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2552 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.529337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.537293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2720 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.537313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.545344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2896 len:8 PRP1 0x200007c48000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.545366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:25.294 [2024-07-12 01:58:51.568173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:3456 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:40:25.294 [2024-07-12 01:58:51.568195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:40:28.597 Initializing NVMe Controllers 00:40:28.597 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:28.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:28.597 Initialization complete. Launching workers. 00:40:28.597 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8616, failed: 8 00:40:28.597 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 7404 00:40:28.597 success 360, unsuccess 860, failed 0 00:40:28.597 01:58:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:28.597 01:58:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:28.597 EAL: No free 2048 kB hugepages reported on node 1 00:40:28.597 [2024-07-12 01:58:54.735148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1936 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:28.597 [2024-07-12 01:58:54.735175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:30.512 [2024-07-12 01:58:56.630843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:214504 len:8 PRP1 0x2000078ca000 PRP2 0x0 00:40:30.512 [2024-07-12 01:58:56.630869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:30.512 [2024-07-12 01:58:56.802937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:233976 len:8 PRP1 0x2000078ee000 PRP2 0x0 00:40:30.512 [2024-07-12 01:58:56.802961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:40:31.454 Initializing NVMe Controllers 00:40:31.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:31.454 Initialization complete. Launching workers. 
00:40:31.454 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42232, failed: 3 00:40:31.454 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2723, failed to submit 39512 00:40:31.454 success 594, unsuccess 2129, failed 0 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.454 01:58:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 112269 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 112269 ']' 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 112269 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112269 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112269' 00:40:33.363 killing process with pid 112269 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 112269 00:40:33.363 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 112269 00:40:33.623 00:40:33.623 real 0m12.186s 00:40:33.623 user 0m49.833s 00:40:33.623 sys 0m1.674s 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.623 ************************************ 00:40:33.623 END TEST spdk_target_abort 00:40:33.623 ************************************ 00:40:33.623 01:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:33.623 01:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:33.623 01:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:33.623 01:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:33.623 ************************************ 00:40:33.623 START TEST kernel_target_abort 00:40:33.623 
************************************ 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:33.623 01:58:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:37.822 Waiting for block devices as requested 00:40:37.822 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:37.822 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:38.084 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:38.084 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:38.344 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:38.344 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:38.344 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:38.344 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:38.605 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:38.605 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:38.605 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:38.605 No valid GPT data, bailing 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:38.605 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:38.866 01:59:04 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:40:38.866 00:40:38.866 Discovery Log Number of Records 2, Generation counter 2 00:40:38.866 =====Discovery Log Entry 0====== 00:40:38.866 trtype: tcp 00:40:38.866 adrfam: ipv4 00:40:38.866 subtype: current discovery subsystem 00:40:38.866 treq: not specified, sq flow control disable supported 00:40:38.866 portid: 1 00:40:38.866 trsvcid: 4420 00:40:38.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:38.866 traddr: 10.0.0.1 00:40:38.866 eflags: none 00:40:38.866 sectype: none 00:40:38.866 =====Discovery Log Entry 1====== 00:40:38.866 trtype: tcp 00:40:38.866 adrfam: ipv4 00:40:38.866 subtype: nvme subsystem 00:40:38.866 treq: not specified, sq flow control disable supported 00:40:38.866 portid: 1 00:40:38.866 trsvcid: 4420 00:40:38.866 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:38.866 traddr: 10.0.0.1 00:40:38.866 eflags: none 00:40:38.866 sectype: none 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:38.866 01:59:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:38.866 01:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:38.866 01:59:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:38.866 EAL: No free 2048 kB hugepages reported on node 1 00:40:42.168 Initializing NVMe Controllers 00:40:42.168 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:42.168 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:42.168 Initialization complete. Launching workers. 00:40:42.168 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60884, failed: 0 00:40:42.168 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 60884, failed to submit 0 00:40:42.168 success 0, unsuccess 60884, failed 0 00:40:42.168 01:59:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:42.168 01:59:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:42.168 EAL: No free 2048 kB hugepages reported on node 1 00:40:45.515 Initializing NVMe Controllers 00:40:45.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:45.515 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:45.515 Initialization complete. Launching workers. 
00:40:45.515 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103097, failed: 0 00:40:45.515 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26006, failed to submit 77091 00:40:45.515 success 0, unsuccess 26006, failed 0 00:40:45.515 01:59:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:45.515 01:59:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:45.515 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.106 Initializing NVMe Controllers 00:40:48.106 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:48.106 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:48.106 Initialization complete. Launching workers. 00:40:48.106 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97765, failed: 0 00:40:48.106 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24430, failed to submit 73335 00:40:48.106 success 0, unsuccess 24430, failed 0 00:40:48.106 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:48.107 01:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:52.310 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:40:52.310 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:52.310 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:53.693 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:53.693 00:40:53.693 real 0m20.081s 00:40:53.693 user 0m9.444s 00:40:53.693 sys 0m6.151s 00:40:53.693 01:59:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:53.693 01:59:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:53.693 ************************************ 00:40:53.693 END TEST kernel_target_abort 00:40:53.693 ************************************ 00:40:53.693 01:59:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:53.693 01:59:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:53.693 01:59:19 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:53.693 01:59:19 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:53.693 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:53.693 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:53.693 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:53.693 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:53.693 rmmod nvme_tcp 00:40:53.693 rmmod nvme_fabrics 00:40:53.954 rmmod nvme_keyring 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 112269 ']' 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 112269 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 112269 ']' 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 112269 00:40:53.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (112269) - No such process 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 112269 is not found' 00:40:53.954 Process with pid 112269 is not found 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:53.954 01:59:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:58.158 Waiting for block devices as requested 00:40:58.158 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:58.158 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:58.418 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:58.418 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:58.418 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:58.678 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:58.678 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:58.678 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:58.678 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:58.938 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:40:58.938 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:58.938 01:59:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.479 01:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:01.479 00:41:01.479 real 0m52.477s 00:41:01.479 user 1m4.822s 00:41:01.479 sys 0m19.124s 00:41:01.479 01:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:01.479 01:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:01.479 ************************************ 00:41:01.479 END TEST nvmf_abort_qd_sizes 00:41:01.479 ************************************ 00:41:01.479 01:59:27 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:01.479 01:59:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:41:01.479 01:59:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:01.479 01:59:27 -- common/autotest_common.sh@10 -- # set +x 00:41:01.479 ************************************ 00:41:01.479 START TEST keyring_file 00:41:01.479 ************************************ 00:41:01.479 01:59:27 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:01.479 * Looking for test storage... 
00:41:01.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:01.479 01:59:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:01.479 01:59:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:01.479 01:59:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:01.479 01:59:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.479 01:59:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.479 01:59:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.480 01:59:27 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.480 01:59:27 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.480 01:59:27 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.480 01:59:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.480 01:59:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.480 01:59:27 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.480 01:59:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:01.480 01:59:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@47 -- # : 0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ARKixl2HcP 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:01.480 01:59:27 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ARKixl2HcP 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ARKixl2HcP 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ARKixl2HcP 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y2f4E6nLuI 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:01.480 01:59:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y2f4E6nLuI 00:41:01.480 01:59:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y2f4E6nLuI 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Y2f4E6nLuI 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=122637 00:41:01.480 01:59:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 122637 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 122637 ']' 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:01.480 01:59:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:01.480 [2024-07-12 01:59:27.609955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
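(For orientation: the two interchange-format PSK files prepared above, /tmp/tmp.ARKixl2HcP for key0 and /tmp/tmp.Y2f4E6nLuI for key1, are registered as file-based keyring entries against the bdevperf RPC socket later in this run. A minimal sketch of those calls, using the same rpc.py path the test uses:)

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI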
00:41:01.480 [2024-07-12 01:59:27.610010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122637 ] 00:41:01.480 EAL: No free 2048 kB hugepages reported on node 1 00:41:01.480 [2024-07-12 01:59:27.679179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.480 [2024-07-12 01:59:27.710488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.049 01:59:28 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:02.049 01:59:28 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:41:02.049 01:59:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:02.049 01:59:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.049 01:59:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:02.049 [2024-07-12 01:59:28.376256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:02.049 null0 00:41:02.309 [2024-07-12 01:59:28.408304] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:02.309 [2024-07-12 01:59:28.408565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:02.309 [2024-07-12 01:59:28.416318] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:41:02.309 01:59:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.309 01:59:28 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:02.309 01:59:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:02.309 01:59:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:02.309 01:59:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:41:02.309 01:59:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:02.310 [2024-07-12 01:59:28.432357] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:02.310 request: 00:41:02.310 { 00:41:02.310 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.310 "secure_channel": false, 00:41:02.310 "listen_address": { 00:41:02.310 "trtype": "tcp", 00:41:02.310 "traddr": "127.0.0.1", 00:41:02.310 "trsvcid": "4420" 00:41:02.310 }, 00:41:02.310 "method": "nvmf_subsystem_add_listener", 00:41:02.310 "req_id": 1 00:41:02.310 } 00:41:02.310 Got JSON-RPC error response 00:41:02.310 response: 00:41:02.310 { 00:41:02.310 "code": -32602, 00:41:02.310 "message": "Invalid parameters" 00:41:02.310 } 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:02.310 01:59:28 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:02.310 01:59:28 keyring_file -- keyring/file.sh@46 -- # bperfpid=122849 00:41:02.310 01:59:28 keyring_file -- keyring/file.sh@48 -- # waitforlisten 122849 /var/tmp/bperf.sock 00:41:02.310 01:59:28 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 122849 ']' 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:02.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:02.310 01:59:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:02.310 [2024-07-12 01:59:28.487969] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:41:02.310 [2024-07-12 01:59:28.488016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122849 ] 00:41:02.310 EAL: No free 2048 kB hugepages reported on node 1 00:41:02.310 [2024-07-12 01:59:28.568697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.310 [2024-07-12 01:59:28.599624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.251 01:59:29 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:03.251 01:59:29 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:41:03.251 01:59:29 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:03.251 01:59:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:03.251 01:59:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI 00:41:03.251 01:59:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI 00:41:03.251 01:59:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:41:03.251 01:59:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:41:03.251 01:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.251 01:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:03.251 01:59:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.511 01:59:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ARKixl2HcP == \/\t\m\p\/\t\m\p\.\A\R\K\i\x\l\2\H\c\P ]] 00:41:03.511 01:59:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:41:03.511 01:59:29 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.511 01:59:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Y2f4E6nLuI == \/\t\m\p\/\t\m\p\.\Y\2\f\4\E\6\n\L\u\I ]] 00:41:03.511 01:59:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:03.511 01:59:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.771 01:59:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:41:03.771 01:59:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:41:03.771 01:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:03.771 01:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:03.771 01:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.771 01:59:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.771 01:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:04.031 01:59:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:04.031 01:59:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:04.031 01:59:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:04.031 [2024-07-12 01:59:30.353762] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:04.291 nvme0n1 00:41:04.291 01:59:30 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:04.291 01:59:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:41:04.291 01:59:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:04.291 
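
Every get_key/get_refcnt check in this run is the same two-stage pattern: ask the bdevperf RPC socket for keyring_get_keys, then filter the JSON with jq. A condensed sketch, assuming an SPDK checkout as the working directory (the run above uses absolute Jenkins workspace paths) and the same bperf socket and temp key files shown in the trace:

    # Register the two key files with the bdevperf app, then verify what the
    # target's keyring reports for key0.
    sock=/var/tmp/bperf.sock
    rpc=./scripts/rpc.py

    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP
    $rpc -s "$sock" keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI

    $rpc -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'
    $rpc -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
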
01:59:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:04.291 01:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:04.550 01:59:30 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:41:04.550 01:59:30 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:04.550 Running I/O for 1 seconds... 00:41:05.933 00:41:05.933 Latency(us) 00:41:05.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.933 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:05.933 nvme0n1 : 1.01 11786.01 46.04 0.00 0.00 10800.76 6417.07 18677.76 00:41:05.933 =================================================================================================================== 00:41:05.933 Total : 11786.01 46.04 0.00 0.00 10800.76 6417.07 18677.76 00:41:05.933 0 00:41:05.933 01:59:31 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:05.933 01:59:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:05.933 01:59:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:05.933 01:59:32 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:41:05.933 01:59:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:05.933 01:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:05.934 01:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:05.934 01:59:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:05.934 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:06.193 01:59:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:06.193 01:59:32 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:06.193 01:59:32 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.193 01:59:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:06.193 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:06.193 [2024-07-12 01:59:32.523159] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:06.193 [2024-07-12 01:59:32.523488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d4f30 (107): Transport endpoint is not connected 00:41:06.193 [2024-07-12 01:59:32.524484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d4f30 (9): Bad file descriptor 00:41:06.193 [2024-07-12 01:59:32.525486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:06.193 [2024-07-12 01:59:32.525494] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:06.193 [2024-07-12 01:59:32.525505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:06.193 request: 00:41:06.193 { 00:41:06.193 "name": "nvme0", 00:41:06.193 "trtype": "tcp", 00:41:06.193 "traddr": "127.0.0.1", 00:41:06.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.193 "adrfam": "ipv4", 00:41:06.193 "trsvcid": "4420", 00:41:06.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.193 "psk": "key1", 00:41:06.194 "method": "bdev_nvme_attach_controller", 00:41:06.194 "req_id": 1 00:41:06.194 } 00:41:06.194 Got JSON-RPC error response 00:41:06.194 response: 00:41:06.194 { 00:41:06.194 "code": -5, 00:41:06.194 "message": "Input/output error" 00:41:06.194 } 00:41:06.194 01:59:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:06.194 01:59:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:06.194 01:59:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:06.194 01:59:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:06.194 01:59:32 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:41:06.194 01:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:06.194 01:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:06.194 01:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:06.194 01:59:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:06.194 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:06.454 01:59:32 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:41:06.454 01:59:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:41:06.454 01:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:06.454 01:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:06.454 01:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:06.454 01:59:32 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:41:06.454 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:06.713 01:59:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:06.713 01:59:32 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:41:06.713 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:06.713 01:59:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:41:06.713 01:59:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:06.973 01:59:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:41:06.973 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:06.973 01:59:33 keyring_file -- keyring/file.sh@77 -- # jq length 00:41:06.973 01:59:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:41:06.973 01:59:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ARKixl2HcP 00:41:06.973 01:59:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.973 01:59:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:06.973 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:07.233 [2024-07-12 01:59:33.429184] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ARKixl2HcP': 0100660 00:41:07.233 [2024-07-12 01:59:33.429202] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:07.233 request: 00:41:07.233 { 00:41:07.233 "name": "key0", 00:41:07.233 "path": "/tmp/tmp.ARKixl2HcP", 00:41:07.233 "method": "keyring_file_add_key", 00:41:07.233 "req_id": 1 00:41:07.233 } 00:41:07.233 Got JSON-RPC error response 00:41:07.233 response: 00:41:07.233 { 00:41:07.233 "code": -1, 00:41:07.233 "message": "Operation not permitted" 00:41:07.233 } 00:41:07.233 01:59:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:07.233 01:59:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:07.233 01:59:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:07.233 01:59:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:07.233 01:59:33 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ARKixl2HcP 00:41:07.233 01:59:33 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:07.233 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ARKixl2HcP 00:41:07.493 01:59:33 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ARKixl2HcP 00:41:07.493 01:59:33 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:07.493 01:59:33 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:41:07.493 01:59:33 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:07.493 01:59:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:07.493 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:07.753 [2024-07-12 01:59:33.906375] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ARKixl2HcP': No such file or directory 00:41:07.753 [2024-07-12 01:59:33.906389] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:07.753 [2024-07-12 01:59:33.906405] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:07.753 [2024-07-12 01:59:33.906409] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:07.753 [2024-07-12 01:59:33.906414] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:07.753 request: 00:41:07.753 { 00:41:07.753 "name": "nvme0", 00:41:07.753 "trtype": "tcp", 00:41:07.753 "traddr": "127.0.0.1", 00:41:07.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:07.753 "adrfam": "ipv4", 00:41:07.753 "trsvcid": "4420", 00:41:07.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.753 "psk": "key0", 00:41:07.753 "method": "bdev_nvme_attach_controller", 
00:41:07.753 "req_id": 1 00:41:07.753 } 00:41:07.753 Got JSON-RPC error response 00:41:07.753 response: 00:41:07.753 { 00:41:07.753 "code": -19, 00:41:07.753 "message": "No such device" 00:41:07.753 } 00:41:07.754 01:59:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:07.754 01:59:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:07.754 01:59:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:07.754 01:59:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:07.754 01:59:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:41:07.754 01:59:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:07.754 01:59:34 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5ssQfBClKH 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:07.754 01:59:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:07.754 01:59:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5ssQfBClKH 00:41:08.014 01:59:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5ssQfBClKH 00:41:08.014 01:59:34 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.5ssQfBClKH 00:41:08.014 01:59:34 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5ssQfBClKH 00:41:08.014 01:59:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5ssQfBClKH 00:41:08.015 01:59:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:08.015 01:59:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:08.276 nvme0n1 00:41:08.276 01:59:34 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:41:08.276 01:59:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:08.276 01:59:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:08.276 01:59:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:08.276 01:59:34 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.276 01:59:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:08.276 01:59:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:41:08.276 01:59:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:41:08.276 01:59:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:08.536 01:59:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:41:08.536 01:59:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:41:08.536 01:59:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:08.536 01:59:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.536 01:59:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:08.798 01:59:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:41:08.798 01:59:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:41:08.798 01:59:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:08.798 01:59:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:08.798 01:59:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:08.798 01:59:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.798 01:59:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:08.798 01:59:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:41:08.798 01:59:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:08.798 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:09.058 01:59:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:41:09.058 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:09.058 01:59:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:41:09.319 01:59:35 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:41:09.319 01:59:35 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5ssQfBClKH 00:41:09.319 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5ssQfBClKH 00:41:09.319 01:59:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI 00:41:09.319 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y2f4E6nLuI 00:41:09.580 01:59:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:09.580 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:09.841 nvme0n1 00:41:09.841 01:59:35 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:41:09.841 01:59:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:09.841 01:59:36 keyring_file -- keyring/file.sh@112 -- # config='{ 00:41:09.841 "subsystems": [ 00:41:09.841 { 00:41:09.841 "subsystem": "keyring", 00:41:09.841 "config": [ 00:41:09.841 { 00:41:09.841 "method": "keyring_file_add_key", 00:41:09.841 "params": { 00:41:09.841 "name": "key0", 00:41:09.841 "path": "/tmp/tmp.5ssQfBClKH" 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "keyring_file_add_key", 00:41:09.841 "params": { 00:41:09.841 "name": "key1", 00:41:09.841 "path": "/tmp/tmp.Y2f4E6nLuI" 00:41:09.841 } 00:41:09.841 } 00:41:09.841 ] 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "subsystem": "iobuf", 00:41:09.841 "config": [ 00:41:09.841 { 00:41:09.841 "method": "iobuf_set_options", 00:41:09.841 "params": { 00:41:09.841 "small_pool_count": 8192, 00:41:09.841 "large_pool_count": 1024, 00:41:09.841 "small_bufsize": 8192, 00:41:09.841 "large_bufsize": 135168 00:41:09.841 } 00:41:09.841 } 00:41:09.841 ] 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "subsystem": "sock", 00:41:09.841 "config": [ 00:41:09.841 { 00:41:09.841 "method": "sock_set_default_impl", 00:41:09.841 "params": { 00:41:09.841 "impl_name": "posix" 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "sock_impl_set_options", 00:41:09.841 "params": { 00:41:09.841 "impl_name": "ssl", 00:41:09.841 "recv_buf_size": 4096, 00:41:09.841 "send_buf_size": 4096, 00:41:09.841 "enable_recv_pipe": true, 00:41:09.841 "enable_quickack": false, 00:41:09.841 "enable_placement_id": 0, 00:41:09.841 "enable_zerocopy_send_server": true, 00:41:09.841 "enable_zerocopy_send_client": false, 00:41:09.841 "zerocopy_threshold": 0, 00:41:09.841 "tls_version": 0, 00:41:09.841 "enable_ktls": false 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "sock_impl_set_options", 00:41:09.841 "params": { 00:41:09.841 "impl_name": "posix", 00:41:09.841 "recv_buf_size": 2097152, 00:41:09.841 "send_buf_size": 2097152, 00:41:09.841 "enable_recv_pipe": true, 00:41:09.841 "enable_quickack": false, 00:41:09.841 "enable_placement_id": 0, 00:41:09.841 "enable_zerocopy_send_server": true, 00:41:09.841 "enable_zerocopy_send_client": false, 00:41:09.841 "zerocopy_threshold": 0, 00:41:09.841 "tls_version": 0, 00:41:09.841 "enable_ktls": false 00:41:09.841 } 00:41:09.841 } 00:41:09.841 ] 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "subsystem": "vmd", 00:41:09.841 "config": [] 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "subsystem": "accel", 00:41:09.841 "config": [ 00:41:09.841 { 00:41:09.841 "method": "accel_set_options", 00:41:09.841 "params": { 00:41:09.841 "small_cache_size": 128, 00:41:09.841 "large_cache_size": 16, 00:41:09.841 "task_count": 2048, 00:41:09.841 "sequence_count": 2048, 00:41:09.841 "buf_count": 2048 00:41:09.841 } 00:41:09.841 } 00:41:09.841 ] 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "subsystem": "bdev", 00:41:09.841 "config": [ 00:41:09.841 { 00:41:09.841 "method": "bdev_set_options", 00:41:09.841 "params": { 00:41:09.841 "bdev_io_pool_size": 65535, 00:41:09.841 "bdev_io_cache_size": 256, 00:41:09.841 "bdev_auto_examine": true, 00:41:09.841 "iobuf_small_cache_size": 128, 
00:41:09.841 "iobuf_large_cache_size": 16 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "bdev_raid_set_options", 00:41:09.841 "params": { 00:41:09.841 "process_window_size_kb": 1024 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "bdev_iscsi_set_options", 00:41:09.841 "params": { 00:41:09.841 "timeout_sec": 30 00:41:09.841 } 00:41:09.841 }, 00:41:09.841 { 00:41:09.841 "method": "bdev_nvme_set_options", 00:41:09.841 "params": { 00:41:09.841 "action_on_timeout": "none", 00:41:09.841 "timeout_us": 0, 00:41:09.841 "timeout_admin_us": 0, 00:41:09.841 "keep_alive_timeout_ms": 10000, 00:41:09.841 "arbitration_burst": 0, 00:41:09.841 "low_priority_weight": 0, 00:41:09.841 "medium_priority_weight": 0, 00:41:09.841 "high_priority_weight": 0, 00:41:09.841 "nvme_adminq_poll_period_us": 10000, 00:41:09.841 "nvme_ioq_poll_period_us": 0, 00:41:09.841 "io_queue_requests": 512, 00:41:09.841 "delay_cmd_submit": true, 00:41:09.841 "transport_retry_count": 4, 00:41:09.841 "bdev_retry_count": 3, 00:41:09.841 "transport_ack_timeout": 0, 00:41:09.841 "ctrlr_loss_timeout_sec": 0, 00:41:09.841 "reconnect_delay_sec": 0, 00:41:09.841 "fast_io_fail_timeout_sec": 0, 00:41:09.841 "disable_auto_failback": false, 00:41:09.841 "generate_uuids": false, 00:41:09.841 "transport_tos": 0, 00:41:09.841 "nvme_error_stat": false, 00:41:09.841 "rdma_srq_size": 0, 00:41:09.841 "io_path_stat": false, 00:41:09.841 "allow_accel_sequence": false, 00:41:09.841 "rdma_max_cq_size": 0, 00:41:09.842 "rdma_cm_event_timeout_ms": 0, 00:41:09.842 "dhchap_digests": [ 00:41:09.842 "sha256", 00:41:09.842 "sha384", 00:41:09.842 "sha512" 00:41:09.842 ], 00:41:09.842 "dhchap_dhgroups": [ 00:41:09.842 "null", 00:41:09.842 "ffdhe2048", 00:41:09.842 "ffdhe3072", 00:41:09.842 "ffdhe4096", 00:41:09.842 "ffdhe6144", 00:41:09.842 "ffdhe8192" 00:41:09.842 ] 00:41:09.842 } 00:41:09.842 }, 00:41:09.842 { 00:41:09.842 "method": "bdev_nvme_attach_controller", 00:41:09.842 "params": { 00:41:09.842 "name": "nvme0", 00:41:09.842 "trtype": "TCP", 00:41:09.842 "adrfam": "IPv4", 00:41:09.842 "traddr": "127.0.0.1", 00:41:09.842 "trsvcid": "4420", 00:41:09.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:09.842 "prchk_reftag": false, 00:41:09.842 "prchk_guard": false, 00:41:09.842 "ctrlr_loss_timeout_sec": 0, 00:41:09.842 "reconnect_delay_sec": 0, 00:41:09.842 "fast_io_fail_timeout_sec": 0, 00:41:09.842 "psk": "key0", 00:41:09.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:09.842 "hdgst": false, 00:41:09.842 "ddgst": false 00:41:09.842 } 00:41:09.842 }, 00:41:09.842 { 00:41:09.842 "method": "bdev_nvme_set_hotplug", 00:41:09.842 "params": { 00:41:09.842 "period_us": 100000, 00:41:09.842 "enable": false 00:41:09.842 } 00:41:09.842 }, 00:41:09.842 { 00:41:09.842 "method": "bdev_wait_for_examine" 00:41:09.842 } 00:41:09.842 ] 00:41:09.842 }, 00:41:09.842 { 00:41:09.842 "subsystem": "nbd", 00:41:09.842 "config": [] 00:41:09.842 } 00:41:09.842 ] 00:41:09.842 }' 00:41:09.842 01:59:36 keyring_file -- keyring/file.sh@114 -- # killprocess 122849 00:41:09.842 01:59:36 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 122849 ']' 00:41:09.842 01:59:36 keyring_file -- common/autotest_common.sh@950 -- # kill -0 122849 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@951 -- # uname 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 122849 00:41:10.103 01:59:36 keyring_file -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 122849' 00:41:10.103 killing process with pid 122849 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@965 -- # kill 122849 00:41:10.103 Received shutdown signal, test time was about 1.000000 seconds 00:41:10.103 00:41:10.103 Latency(us) 00:41:10.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.103 =================================================================================================================== 00:41:10.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@970 -- # wait 122849 00:41:10.103 01:59:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=124338 00:41:10.103 01:59:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 124338 /var/tmp/bperf.sock 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 124338 ']' 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:10.103 01:59:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:10.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
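
The second bdevperf instance (pid 124338) is launched with "-z -c /dev/fd/63": the JSON captured earlier with save_config on the first instance is handed to the new process through process substitution, so it recreates the key files and the PSK-protected controller from config instead of via individual RPCs. A hedged sketch of that restart pattern, with paths assumed relative to an SPDK checkout:

    # save_config -> restart-from-config pattern used above.
    sock=/var/tmp/bperf.sock
    config=$(./scripts/rpc.py -s "$sock" save_config)    # dump keyring + bdev config as JSON

    # A fresh bdevperf loads the same keyring entries and the nvme0 controller
    # from the JSON it reads on /dev/fd/63.
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r "$sock" -z -c <(echo "$config")
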
00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:10.103 01:59:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:10.103 01:59:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:41:10.103 "subsystems": [ 00:41:10.103 { 00:41:10.103 "subsystem": "keyring", 00:41:10.103 "config": [ 00:41:10.103 { 00:41:10.103 "method": "keyring_file_add_key", 00:41:10.103 "params": { 00:41:10.103 "name": "key0", 00:41:10.103 "path": "/tmp/tmp.5ssQfBClKH" 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "keyring_file_add_key", 00:41:10.103 "params": { 00:41:10.103 "name": "key1", 00:41:10.103 "path": "/tmp/tmp.Y2f4E6nLuI" 00:41:10.103 } 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "iobuf", 00:41:10.103 "config": [ 00:41:10.103 { 00:41:10.103 "method": "iobuf_set_options", 00:41:10.103 "params": { 00:41:10.103 "small_pool_count": 8192, 00:41:10.103 "large_pool_count": 1024, 00:41:10.103 "small_bufsize": 8192, 00:41:10.103 "large_bufsize": 135168 00:41:10.103 } 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "sock", 00:41:10.103 "config": [ 00:41:10.103 { 00:41:10.103 "method": "sock_set_default_impl", 00:41:10.103 "params": { 00:41:10.103 "impl_name": "posix" 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "sock_impl_set_options", 00:41:10.103 "params": { 00:41:10.103 "impl_name": "ssl", 00:41:10.103 "recv_buf_size": 4096, 00:41:10.103 "send_buf_size": 4096, 00:41:10.103 "enable_recv_pipe": true, 00:41:10.103 "enable_quickack": false, 00:41:10.103 "enable_placement_id": 0, 00:41:10.103 "enable_zerocopy_send_server": true, 00:41:10.103 "enable_zerocopy_send_client": false, 00:41:10.103 "zerocopy_threshold": 0, 00:41:10.103 "tls_version": 0, 00:41:10.103 "enable_ktls": false 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "sock_impl_set_options", 00:41:10.103 "params": { 00:41:10.103 "impl_name": "posix", 00:41:10.103 "recv_buf_size": 2097152, 00:41:10.103 "send_buf_size": 2097152, 00:41:10.103 "enable_recv_pipe": true, 00:41:10.103 "enable_quickack": false, 00:41:10.103 "enable_placement_id": 0, 00:41:10.103 "enable_zerocopy_send_server": true, 00:41:10.103 "enable_zerocopy_send_client": false, 00:41:10.103 "zerocopy_threshold": 0, 00:41:10.103 "tls_version": 0, 00:41:10.103 "enable_ktls": false 00:41:10.103 } 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "vmd", 00:41:10.103 "config": [] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "accel", 00:41:10.103 "config": [ 00:41:10.103 { 00:41:10.103 "method": "accel_set_options", 00:41:10.103 "params": { 00:41:10.103 "small_cache_size": 128, 00:41:10.103 "large_cache_size": 16, 00:41:10.103 "task_count": 2048, 00:41:10.103 "sequence_count": 2048, 00:41:10.103 "buf_count": 2048 00:41:10.103 } 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "bdev", 00:41:10.103 "config": [ 00:41:10.103 { 00:41:10.103 "method": "bdev_set_options", 00:41:10.103 "params": { 00:41:10.103 "bdev_io_pool_size": 65535, 00:41:10.103 "bdev_io_cache_size": 256, 00:41:10.103 "bdev_auto_examine": true, 00:41:10.103 "iobuf_small_cache_size": 128, 00:41:10.103 "iobuf_large_cache_size": 16 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "bdev_raid_set_options", 00:41:10.103 "params": { 00:41:10.103 "process_window_size_kb": 1024 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 
"method": "bdev_iscsi_set_options", 00:41:10.103 "params": { 00:41:10.103 "timeout_sec": 30 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "bdev_nvme_set_options", 00:41:10.103 "params": { 00:41:10.103 "action_on_timeout": "none", 00:41:10.103 "timeout_us": 0, 00:41:10.103 "timeout_admin_us": 0, 00:41:10.103 "keep_alive_timeout_ms": 10000, 00:41:10.103 "arbitration_burst": 0, 00:41:10.103 "low_priority_weight": 0, 00:41:10.103 "medium_priority_weight": 0, 00:41:10.103 "high_priority_weight": 0, 00:41:10.103 "nvme_adminq_poll_period_us": 10000, 00:41:10.103 "nvme_ioq_poll_period_us": 0, 00:41:10.103 "io_queue_requests": 512, 00:41:10.103 "delay_cmd_submit": true, 00:41:10.103 "transport_retry_count": 4, 00:41:10.103 "bdev_retry_count": 3, 00:41:10.103 "transport_ack_timeout": 0, 00:41:10.103 "ctrlr_loss_timeout_sec": 0, 00:41:10.103 "reconnect_delay_sec": 0, 00:41:10.103 "fast_io_fail_timeout_sec": 0, 00:41:10.103 "disable_auto_failback": false, 00:41:10.103 "generate_uuids": false, 00:41:10.103 "transport_tos": 0, 00:41:10.103 "nvme_error_stat": false, 00:41:10.103 "rdma_srq_size": 0, 00:41:10.103 "io_path_stat": false, 00:41:10.103 "allow_accel_sequence": false, 00:41:10.103 "rdma_max_cq_size": 0, 00:41:10.103 "rdma_cm_event_timeout_ms": 0, 00:41:10.103 "dhchap_digests": [ 00:41:10.103 "sha256", 00:41:10.103 "sha384", 00:41:10.103 "sha512" 00:41:10.103 ], 00:41:10.103 "dhchap_dhgroups": [ 00:41:10.103 "null", 00:41:10.103 "ffdhe2048", 00:41:10.103 "ffdhe3072", 00:41:10.103 "ffdhe4096", 00:41:10.103 "ffdhe6144", 00:41:10.103 "ffdhe8192" 00:41:10.103 ] 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "bdev_nvme_attach_controller", 00:41:10.103 "params": { 00:41:10.103 "name": "nvme0", 00:41:10.103 "trtype": "TCP", 00:41:10.103 "adrfam": "IPv4", 00:41:10.103 "traddr": "127.0.0.1", 00:41:10.103 "trsvcid": "4420", 00:41:10.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:10.103 "prchk_reftag": false, 00:41:10.103 "prchk_guard": false, 00:41:10.103 "ctrlr_loss_timeout_sec": 0, 00:41:10.103 "reconnect_delay_sec": 0, 00:41:10.103 "fast_io_fail_timeout_sec": 0, 00:41:10.103 "psk": "key0", 00:41:10.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:10.103 "hdgst": false, 00:41:10.103 "ddgst": false 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "bdev_nvme_set_hotplug", 00:41:10.103 "params": { 00:41:10.103 "period_us": 100000, 00:41:10.103 "enable": false 00:41:10.103 } 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "method": "bdev_wait_for_examine" 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }, 00:41:10.103 { 00:41:10.103 "subsystem": "nbd", 00:41:10.103 "config": [] 00:41:10.103 } 00:41:10.103 ] 00:41:10.103 }' 00:41:10.103 [2024-07-12 01:59:36.402494] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:41:10.103 [2024-07-12 01:59:36.402553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124338 ] 00:41:10.103 EAL: No free 2048 kB hugepages reported on node 1 00:41:10.363 [2024-07-12 01:59:36.484060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.363 [2024-07-12 01:59:36.512476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:10.363 [2024-07-12 01:59:36.649073] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:10.935 01:59:37 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:10.935 01:59:37 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:41:10.935 01:59:37 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:41:10.935 01:59:37 keyring_file -- keyring/file.sh@120 -- # jq length 00:41:10.935 01:59:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:11.196 01:59:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:41:11.196 01:59:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:11.196 01:59:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:11.196 01:59:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:11.196 01:59:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:41:11.456 01:59:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5ssQfBClKH /tmp/tmp.Y2f4E6nLuI 00:41:11.456 01:59:37 keyring_file -- keyring/file.sh@20 -- # killprocess 124338 00:41:11.456 01:59:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 124338 ']' 00:41:11.456 01:59:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 124338 00:41:11.456 01:59:37 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:41:11.456 01:59:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124338 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124338' 00:41:11.718 killing process with pid 124338 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@965 -- # kill 124338 00:41:11.718 Received shutdown signal, test time was about 1.000000 seconds 00:41:11.718 00:41:11.718 Latency(us) 00:41:11.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:11.718 =================================================================================================================== 00:41:11.718 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@970 -- # wait 124338 00:41:11.718 01:59:37 keyring_file -- keyring/file.sh@21 -- # killprocess 122637 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 122637 ']' 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 122637 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@951 -- # uname 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:11.718 01:59:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 122637 00:41:11.718 01:59:38 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:41:11.718 01:59:38 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:41:11.718 01:59:38 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 122637' 00:41:11.718 killing process with pid 122637 00:41:11.718 01:59:38 keyring_file -- common/autotest_common.sh@965 -- # kill 122637 00:41:11.718 [2024-07-12 01:59:38.019417] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:41:11.718 01:59:38 keyring_file -- common/autotest_common.sh@970 -- # wait 122637 00:41:11.979 00:41:11.979 real 0m10.882s 00:41:11.979 user 0m25.843s 00:41:11.979 sys 0m2.522s 00:41:11.979 01:59:38 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:11.979 01:59:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:11.979 ************************************ 00:41:11.979 END TEST keyring_file 00:41:11.979 ************************************ 00:41:11.979 01:59:38 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:41:11.979 01:59:38 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:11.979 01:59:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:41:11.979 01:59:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:11.979 01:59:38 -- common/autotest_common.sh@10 -- # set +x 00:41:11.979 ************************************ 00:41:11.979 START TEST keyring_linux 00:41:11.979 ************************************ 00:41:11.979 01:59:38 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:12.240 * Looking for test storage... 
00:41:12.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:12.241 01:59:38 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:12.241 01:59:38 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.241 01:59:38 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.241 01:59:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.241 01:59:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.241 01:59:38 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.241 01:59:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:12.241 01:59:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:12.241 01:59:38 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:12.241 /tmp/:spdk-test:key0 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:12.241 01:59:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:12.241 01:59:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:12.241 /tmp/:spdk-test:key1 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=124870 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 124870 00:41:12.241 01:59:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 124870 ']' 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:12.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:12.241 01:59:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:12.241 [2024-07-12 01:59:38.569957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
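prep_key (common.sh@15-23, just above) turns the raw hex string 00112233445566778899aabbccddeeff into the NVMe TLS PSK interchange format that keyctl registers a little further below as NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, writes it to /tmp/:spdk-test:key0 and chmods the file to 0600 (key1 gets the same treatment). The real work happens in the small "python -" step whose source is not echoed by the trace; the sketch below is an assumption based on the interchange-format convention of base64-encoding the configured PSK followed by its CRC-32, with the "00" field meaning no PSK hash - only the inputs, the resulting key string and the file handling come from the trace:

  # Sketch only: what the embedded python step appears to compute for key0.
  key=00112233445566778899aabbccddeeff
  digest=0
  psk=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest")
  echo "$psk"                          # should match the NVMeTLSkey-1:00:MDAx... string below if the CRC-32 trailer assumption holds
  echo "$psk" > /tmp/:spdk-test:key0   # common.sh@18/@20
  chmod 0600 /tmp/:spdk-test:key0      # common.sh@21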
00:41:12.241 [2024-07-12 01:59:38.570031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124870 ] 00:41:12.502 EAL: No free 2048 kB hugepages reported on node 1 00:41:12.502 [2024-07-12 01:59:38.644623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.502 [2024-07-12 01:59:38.683521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:13.073 [2024-07-12 01:59:39.355336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:13.073 null0 00:41:13.073 [2024-07-12 01:59:39.387376] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:13.073 [2024-07-12 01:59:39.387763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:13.073 685257904 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:13.073 1020630365 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=125095 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 125095 /var/tmp/bperf.sock 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 125095 ']' 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:13.073 01:59:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:13.073 01:59:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:13.334 [2024-07-12 01:59:39.460518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
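Both interchange-format PSKs are now registered in the caller's session keyring (serial 685257904 for :spdk-test:key0, 1020630365 for :spdk-test:key1), and bdevperf has just been launched with -z --wait-for-rpc, so it idles until it is configured over /var/tmp/bperf.sock. Everything that follows is driven through that socket; a condensed sketch of the same sequence, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used in this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch the I/O generator idle, bound to its own RPC socket (linux.sh@68/@70)
  $SPDK/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z --wait-for-rpc &
  # allow keys to be resolved from the Linux keyring, then finish subsystem init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach to the TLS listener on 127.0.0.1:4420 using the PSK registered as :spdk-test:key0
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # drive the 1-second randread run whose results are reported below
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later negative test repeats the attach with --psk :spdk-test:key1 inside a NOT wrapper and expects the JSON-RPC call to fail, apparently because that PSK is not the one the target side was provisioned with.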
00:41:13.334 [2024-07-12 01:59:39.460565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125095 ] 00:41:13.334 EAL: No free 2048 kB hugepages reported on node 1 00:41:13.334 [2024-07-12 01:59:39.539241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.334 [2024-07-12 01:59:39.567791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:13.907 01:59:40 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:13.907 01:59:40 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:41:13.907 01:59:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:13.907 01:59:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:14.168 01:59:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:14.168 01:59:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:14.429 01:59:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:14.429 01:59:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:14.429 [2024-07-12 01:59:40.701395] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:14.429 nvme0n1 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:14.690 01:59:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:14.690 01:59:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:14.690 01:59:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:14.690 01:59:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:14.690 01:59:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@25 -- # sn=685257904 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
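The check_keys 1 :spdk-test:key0 pass around this point cross-checks bdevperf's view of the key against the kernel keyring: keyring_get_keys over the RPC socket must report exactly one key, its .sn field must equal the serial that keyctl search finds in the session keyring, and keyctl print on that serial must reproduce the interchange-format PSK. A rough stand-alone equivalent using the same commands that appear in the trace (the jq field names .name and .sn come from the trace; everything else is an assumption):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  name=:spdk-test:key0
  count=$($rpc keyring_get_keys | jq length)                 # expect 1 after the attach above
  sn=$($rpc keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .sn")
  kernel_sn=$(keyctl search @s user "$name")                 # 685257904 in this run
  [ "$sn" = "$kernel_sn" ] || echo "serial mismatch: $sn vs $kernel_sn"
  keyctl print "$kernel_sn"                                  # should print the NVMeTLSkey-1:00:... PSK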
00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 685257904 == \6\8\5\2\5\7\9\0\4 ]] 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 685257904 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:14.952 01:59:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:14.952 Running I/O for 1 seconds... 00:41:15.893 00:41:15.893 Latency(us) 00:41:15.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.893 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:15.893 nvme0n1 : 1.01 12342.82 48.21 0.00 0.00 10314.53 8246.61 16711.68 00:41:15.893 =================================================================================================================== 00:41:15.893 Total : 12342.82 48.21 0.00 0.00 10314.53 8246.61 16711.68 00:41:15.893 0 00:41:15.893 01:59:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:15.893 01:59:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:16.153 01:59:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:16.153 01:59:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:16.153 01:59:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:16.153 01:59:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:16.153 01:59:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.153 01:59:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:16.413 01:59:42 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:16.413 [2024-07-12 01:59:42.698332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:16.413 [2024-07-12 01:59:42.698663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x607a90 (107): Transport endpoint is not connected 00:41:16.413 [2024-07-12 01:59:42.699659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x607a90 (9): Bad file descriptor 00:41:16.413 [2024-07-12 01:59:42.700660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:16.413 [2024-07-12 01:59:42.700672] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:16.413 [2024-07-12 01:59:42.700677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:16.413 request: 00:41:16.413 { 00:41:16.413 "name": "nvme0", 00:41:16.413 "trtype": "tcp", 00:41:16.413 "traddr": "127.0.0.1", 00:41:16.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:16.413 "adrfam": "ipv4", 00:41:16.413 "trsvcid": "4420", 00:41:16.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:16.413 "psk": ":spdk-test:key1", 00:41:16.413 "method": "bdev_nvme_attach_controller", 00:41:16.413 "req_id": 1 00:41:16.413 } 00:41:16.413 Got JSON-RPC error response 00:41:16.413 response: 00:41:16.413 { 00:41:16.413 "code": -5, 00:41:16.413 "message": "Input/output error" 00:41:16.413 } 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@33 -- # sn=685257904 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 685257904 00:41:16.413 1 links removed 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@33 -- # sn=1020630365 00:41:16.413 01:59:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1020630365 00:41:16.413 1 links removed 00:41:16.413 01:59:42 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 125095 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 125095 ']' 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 125095 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:16.413 01:59:42 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125095 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125095' 00:41:16.672 killing process with pid 125095 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@965 -- # kill 125095 00:41:16.672 Received shutdown signal, test time was about 1.000000 seconds 00:41:16.672 00:41:16.672 Latency(us) 00:41:16.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:16.672 =================================================================================================================== 00:41:16.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@970 -- # wait 125095 00:41:16.672 01:59:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 124870 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 124870 ']' 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 124870 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124870 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124870' 00:41:16.672 killing process with pid 124870 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@965 -- # kill 124870 00:41:16.672 01:59:42 keyring_linux -- common/autotest_common.sh@970 -- # wait 124870 00:41:16.932 00:41:16.932 real 0m4.848s 00:41:16.932 user 0m8.428s 00:41:16.932 sys 0m1.486s 00:41:16.932 01:59:43 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:16.932 01:59:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:16.932 ************************************ 00:41:16.932 END TEST keyring_linux 00:41:16.932 ************************************ 00:41:16.932 01:59:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:41:16.932 01:59:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:16.932 01:59:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:16.932 01:59:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:16.932 01:59:43 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:41:16.932 01:59:43 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:41:16.932 01:59:43 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:41:16.932 01:59:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:16.932 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:41:16.932 01:59:43 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:41:16.932 01:59:43 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:41:16.932 01:59:43 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:41:16.932 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:41:25.169 INFO: APP EXITING 00:41:25.169 INFO: killing all VMs 00:41:25.169 INFO: killing vhost app 00:41:25.169 INFO: EXIT DONE 00:41:28.469 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:28.469 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:28.469 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:32.681 Cleaning 00:41:32.681 Removing: /var/run/dpdk/spdk0/config 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:32.681 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:32.681 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:32.681 Removing: /var/run/dpdk/spdk1/config 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:32.681 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:32.681 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:32.681 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:32.681 Removing: /var/run/dpdk/spdk1/mp_socket 00:41:32.681 Removing: /var/run/dpdk/spdk2/config 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:32.681 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:32.681 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:32.681 Removing: /var/run/dpdk/spdk3/config 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:32.681 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:32.681 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:32.681 Removing: /var/run/dpdk/spdk4/config 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:32.681 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:32.681 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:32.681 Removing: /dev/shm/bdev_svc_trace.1 00:41:32.681 Removing: /dev/shm/nvmf_trace.0 00:41:32.681 Removing: /dev/shm/spdk_tgt_trace.pid3735204 00:41:32.681 Removing: /var/run/dpdk/spdk0 00:41:32.681 Removing: /var/run/dpdk/spdk1 00:41:32.681 Removing: /var/run/dpdk/spdk2 00:41:32.681 Removing: /var/run/dpdk/spdk3 00:41:32.681 Removing: /var/run/dpdk/spdk4 00:41:32.681 Removing: /var/run/dpdk/spdk_pid100572 00:41:32.681 Removing: /var/run/dpdk/spdk_pid101799 00:41:32.681 Removing: /var/run/dpdk/spdk_pid112428 00:41:32.681 Removing: /var/run/dpdk/spdk_pid112978 00:41:32.681 Removing: /var/run/dpdk/spdk_pid113650 00:41:32.681 Removing: /var/run/dpdk/spdk_pid116704 00:41:32.681 Removing: /var/run/dpdk/spdk_pid117187 00:41:32.681 Removing: /var/run/dpdk/spdk_pid117729 00:41:32.681 Removing: /var/run/dpdk/spdk_pid122637 00:41:32.681 Removing: /var/run/dpdk/spdk_pid122849 00:41:32.682 Removing: /var/run/dpdk/spdk_pid124338 00:41:32.682 Removing: /var/run/dpdk/spdk_pid124870 00:41:32.682 Removing: /var/run/dpdk/spdk_pid125095 00:41:32.682 Removing: /var/run/dpdk/spdk_pid12870 00:41:32.682 Removing: /var/run/dpdk/spdk_pid22951 00:41:32.682 Removing: /var/run/dpdk/spdk_pid32483 00:41:32.682 
Removing: /var/run/dpdk/spdk_pid32513 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3733611 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3735204 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3735732 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3736906 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3737104 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3738419 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3738610 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3738886 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3740305 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3741001 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3741283 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3741551 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3741936 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3742326 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3742684 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3742887 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3743106 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3744485 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3747730 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3748088 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3748216 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3748469 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3748842 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3749158 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3749548 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3749661 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3749926 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3750132 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3750302 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3750584 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3751071 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3751274 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3751509 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3751865 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3751892 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3752129 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3752313 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3752660 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3753007 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3753281 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3753430 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3753746 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3754095 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3754433 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3754582 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3754831 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3755179 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3755529 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3755745 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3755927 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3756264 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3756611 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3756923 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3757101 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3757356 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3757712 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3757780 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3758174 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3762997 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3862455 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3868133 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3880677 00:41:32.682 Removing: /var/run/dpdk/spdk_pid3887803 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3893158 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3893833 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3908530 00:41:32.943 
Removing: /var/run/dpdk/spdk_pid3908580 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3909586 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3910608 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3911634 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3912265 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3912410 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3912626 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3912880 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3912883 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3913888 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3914893 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3915903 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3916572 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3916575 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3916913 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3918340 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3919520 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3930650 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3931004 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3936587 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3943785 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3946857 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3960062 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3971770 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3973777 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3974792 00:41:32.943 Removing: /var/run/dpdk/spdk_pid3997041 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4002348 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4033861 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4039620 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4041467 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4043600 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4043799 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4043808 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4043910 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4044376 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4046546 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4047483 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4047989 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4050363 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4051072 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4051785 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4057198 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4064208 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4069905 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4116526 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4121312 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4129476 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4130974 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4132681 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4138263 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4143649 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4153728 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4153742 00:41:32.943 Removing: /var/run/dpdk/spdk_pid4159135 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4159411 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4159513 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4160142 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4160155 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4161509 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4163507 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4165413 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4167251 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4169183 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4171285 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4179461 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4180278 00:41:33.204 
Removing: /var/run/dpdk/spdk_pid4181351 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4182638 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4189183 00:41:33.204 Removing: /var/run/dpdk/spdk_pid4192222 00:41:33.204 Removing: /var/run/dpdk/spdk_pid55958 00:41:33.204 Removing: /var/run/dpdk/spdk_pid56724 00:41:33.204 Removing: /var/run/dpdk/spdk_pid57487 00:41:33.204 Removing: /var/run/dpdk/spdk_pid58204 00:41:33.204 Removing: /var/run/dpdk/spdk_pid5862 00:41:33.204 Removing: /var/run/dpdk/spdk_pid59108 00:41:33.204 Removing: /var/run/dpdk/spdk_pid59832 00:41:33.204 Removing: /var/run/dpdk/spdk_pid60544 00:41:33.204 Removing: /var/run/dpdk/spdk_pid61290 00:41:33.204 Removing: /var/run/dpdk/spdk_pid66831 00:41:33.205 Removing: /var/run/dpdk/spdk_pid67162 00:41:33.205 Removing: /var/run/dpdk/spdk_pid74568 00:41:33.205 Removing: /var/run/dpdk/spdk_pid74941 00:41:33.205 Removing: /var/run/dpdk/spdk_pid77463 00:41:33.205 Removing: /var/run/dpdk/spdk_pid85683 00:41:33.205 Removing: /var/run/dpdk/spdk_pid85744 00:41:33.205 Removing: /var/run/dpdk/spdk_pid92164 00:41:33.205 Removing: /var/run/dpdk/spdk_pid94469 00:41:33.205 Removing: /var/run/dpdk/spdk_pid96864 00:41:33.205 Removing: /var/run/dpdk/spdk_pid98049 00:41:33.205 Clean 00:41:33.205 01:59:59 -- common/autotest_common.sh@1447 -- # return 0 00:41:33.205 01:59:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:41:33.205 01:59:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:33.205 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:41:33.466 01:59:59 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:41:33.466 01:59:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:33.466 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:41:33.466 01:59:59 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:33.466 01:59:59 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:33.466 01:59:59 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:33.466 01:59:59 -- spdk/autotest.sh@391 -- # hash lcov 00:41:33.466 01:59:59 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:33.466 01:59:59 -- spdk/autotest.sh@393 -- # hostname 00:41:33.466 01:59:59 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:33.466 geninfo: WARNING: invalid characters removed from testname! 
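The coverage capture just above (lcov -c -d $SPDK -t spdk-cyp-12 -o cov_test.info) is followed by a merge with the pre-test baseline and a series of filters that strip DPDK, system headers and a few example/app directories, so only SPDK's own sources end up in the final tracefile. A condensed sketch of that sequence, with the workspace prefix shortened to $SPDK/$OUT and the long option list collapsed into $LCOV_OPTS (the trace also passes several genhtml/geninfo rc settings that are omitted here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # capture this run's coverage, then fold it into the baseline taken before the tests
  lcov $LCOV_OPTS -c -d $SPDK -t "$(hostname)" -o $OUT/cov_test.info
  lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  # drop everything that is not SPDK's own code from the combined tracefile
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
  done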
00:42:00.053 02:00:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:01.442 02:00:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:03.356 02:00:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:05.268 02:00:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:06.653 02:00:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:08.562 02:00:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:09.948 02:00:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:09.948 02:00:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.948 02:00:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:09.948 02:00:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.948 02:00:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.948 02:00:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.948 02:00:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.948 02:00:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.948 02:00:36 -- paths/export.sh@5 -- $ export PATH 00:42:09.948 02:00:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.948 02:00:36 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:42:09.948 02:00:36 -- common/autobuild_common.sh@437 -- $ date +%s 00:42:09.948 02:00:36 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720742436.XXXXXX 00:42:09.948 02:00:36 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720742436.2GrLE3 00:42:09.948 02:00:36 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:42:09.948 02:00:36 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:42:09.948 02:00:36 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:42:09.948 02:00:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:42:09.948 02:00:36 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:42:09.948 02:00:36 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:42:09.948 02:00:36 -- common/autobuild_common.sh@453 -- $ get_config_params 00:42:09.948 02:00:36 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:42:09.948 02:00:36 -- common/autotest_common.sh@10 -- $ set +x 00:42:09.948 02:00:36 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:42:09.948 02:00:36 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:42:09.948 02:00:36 -- pm/common@17 -- $ local monitor 00:42:09.948 02:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:09.948 02:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:09.948 02:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:09.948 
02:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:09.948 02:00:36 -- pm/common@21 -- $ date +%s 00:42:09.948 02:00:36 -- pm/common@25 -- $ sleep 1 00:42:09.948 02:00:36 -- pm/common@21 -- $ date +%s 00:42:09.948 02:00:36 -- pm/common@21 -- $ date +%s 00:42:09.948 02:00:36 -- pm/common@21 -- $ date +%s 00:42:09.948 02:00:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720742436 00:42:09.949 02:00:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720742436 00:42:09.949 02:00:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720742436 00:42:09.949 02:00:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720742436 00:42:09.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720742436_collect-vmstat.pm.log 00:42:09.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720742436_collect-cpu-load.pm.log 00:42:09.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720742436_collect-cpu-temp.pm.log 00:42:09.949 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720742436_collect-bmc-pm.bmc.pm.log 00:42:10.889 02:00:37 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:42:10.889 02:00:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:42:10.889 02:00:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:10.889 02:00:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:42:10.889 02:00:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:42:10.889 02:00:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:42:10.889 02:00:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:42:10.889 02:00:37 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:10.889 02:00:37 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:42:10.889 02:00:37 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:10.889 02:00:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:42:10.889 02:00:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:42:10.889 02:00:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:42:10.889 02:00:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:42:10.889 02:00:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:10.889 02:00:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:42:10.889 02:00:37 -- pm/common@44 -- $ pid=139236 00:42:10.889 02:00:37 -- pm/common@50 -- $ kill -TERM 139236 00:42:10.889 02:00:37 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:42:10.889 02:00:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:42:10.889 02:00:37 -- pm/common@44 -- $ pid=139237 00:42:10.889 02:00:37 -- pm/common@50 -- $ kill -TERM 139237 00:42:10.889 02:00:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:10.889 02:00:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:42:10.889 02:00:37 -- pm/common@44 -- $ pid=139239 00:42:10.889 02:00:37 -- pm/common@50 -- $ kill -TERM 139239 00:42:10.889 02:00:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:10.889 02:00:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:42:10.889 02:00:37 -- pm/common@44 -- $ pid=139267 00:42:10.889 02:00:37 -- pm/common@50 -- $ sudo -E kill -TERM 139267 00:42:10.889 + [[ -n 3596337 ]] 00:42:10.889 + sudo kill 3596337 00:42:10.901 [Pipeline] } 00:42:10.921 [Pipeline] // stage 00:42:10.927 [Pipeline] } 00:42:10.944 [Pipeline] // timeout 00:42:10.949 [Pipeline] } 00:42:10.967 [Pipeline] // catchError 00:42:10.972 [Pipeline] } 00:42:10.991 [Pipeline] // wrap 00:42:10.999 [Pipeline] } 00:42:11.017 [Pipeline] // catchError 00:42:11.027 [Pipeline] stage 00:42:11.030 [Pipeline] { (Epilogue) 00:42:11.047 [Pipeline] catchError 00:42:11.049 [Pipeline] { 00:42:11.067 [Pipeline] echo 00:42:11.069 Cleanup processes 00:42:11.076 [Pipeline] sh 00:42:11.365 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:11.365 139346 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:42:11.365 139785 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:11.385 [Pipeline] sh 00:42:11.678 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:11.678 ++ grep -v 'sudo pgrep' 00:42:11.678 ++ awk '{print $1}' 00:42:11.678 + sudo kill -9 139346 00:42:11.691 [Pipeline] sh 00:42:11.978 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:24.273 [Pipeline] sh 00:42:24.559 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:24.559 Artifacts sizes are good 00:42:24.574 [Pipeline] archiveArtifacts 00:42:24.581 Archiving artifacts 00:42:24.828 [Pipeline] sh 00:42:25.120 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:42:25.136 [Pipeline] cleanWs 00:42:25.147 [WS-CLEANUP] Deleting project workspace... 00:42:25.147 [WS-CLEANUP] Deferred wipeout is used... 00:42:25.154 [WS-CLEANUP] done 00:42:25.157 [Pipeline] } 00:42:25.183 [Pipeline] // catchError 00:42:25.201 [Pipeline] sh 00:42:25.499 + logger -p user.info -t JENKINS-CI 00:42:25.508 [Pipeline] } 00:42:25.523 [Pipeline] // stage 00:42:25.528 [Pipeline] } 00:42:25.543 [Pipeline] // node 00:42:25.548 [Pipeline] End of Pipeline 00:42:25.585 Finished: SUCCESS